[Binary artifact, not text: POSIX (ustar) tar archive. Recoverable file listing from the archive headers:
  var/home/core/zuul-output/                     (directory)
  var/home/core/zuul-output/logs/                (directory)
  var/home/core/zuul-output/logs/kubelet.log.gz  (gzip-compressed kubelet log)
The remainder of the file is the gzip-compressed payload of kubelet.log.gz and is not renderable as text.]
.BQEPi/ٴs,>C(wBORr҆AVY 'jQpQf4ķscQUX&IQ"'ԥqެ9~t|\<2wH5ܒ 7>~✽}.4O޲`IC=Aγ<γ<γ<۾yr;϶Yvmvmvmvm:϶l;϶l;϶[[vmvmvmvd$vmvmvmvm~G@ٵ;]kw޵߻{~]7H){=_kw޵߻{~]kwJ=H]kw޵߻{~?Pɸr$r|͔c|ESs`OJd+7!Ee5i@Ω`W i;xğk!);M<󣫓bFv˱i d1[P51z&8LDiSV!8 @XH}8{?]N¡|TFGWyztՉʋn6wޝ^}azv5IŬm{qvRIdݳ^[&V{|`uۤI[Dxj-_i)gT!՞ ?]%$Wtp&oQdYYB!Q@E]TS>KY!#X]&m(c@TˬO2t)@ɁY]RY߈FG+@\+WhL7]SN` 䢫r^E'5(,RyTTNg@&Kkt͚cŀY 4>~:%:%:ߜNu݈/{۫jpyqFQo䈀LEITtA0?{Ƒ `|pE`~J\S$ál+W=3(CRPiWU詔 8њj1R`& ӄF+C FDx%M6 ;&Q}Lq~ce2p2 cf3>I]_'`be@?g37㳲~ 7[|M8FƠǹ'` 8׃@lEL"zX" SP.S3EW@^PJ`PUtnP& -,IvW!袻&rkGDAa4KX-3—Jw)dS DV۲cv9~8IW=S)hr^qOu1iK>F&ϲ;cKgUdbśetZEg(R};KFl4|=+ݯN/h10zZC4!2|.a4.`ŴGÛDmM0mnu>ȦV*i]ZC00a|<woo߼x[|@[޻ !ʄ%@E !UC2,s6ȹM1tdXD} -J)KC4Ollj ;[Ț }R|6D){S9Wsy wG1ۄjGztRNn|z̰ ZŁ,rmkF厡}nIPg:F[ws sDDAHwAJ4%gXx^s馬ϫwL_!-}7JxE6j-6oڡx\p}NKV[S2_og*ao0,äOuf3ǭGKk)Jmd\M9l?>SVρđ{9c$m|,}5ni#X_ٽGuE٥Fֲ"oΠrI~>?x|"r^0'6uMټ[O29< ksDKQT5;>fIu|Q ,RB>ee)RR(Ŝ(aH|6AV~p2U\LA/Cuk+Y Jl&1Csm^7aP }'kEǗ×%8s gv11fqh_<3cuLL1_'JZ15WNJuA[j۰y`w @'[mېqBy1v¬r$VL[ǷrsĔ|a II$U"wKϯaF'?ab w>7NJo-u-s `cԡT}&.8K-*2~7kL8EE&rMlhh U+> &V_{9_R)+d(l^n/գu)J$7/]i5x6xe:**C/?eu]_Hq3(haE8eo`]Lǟ)ia-eK5ׯ`YGH(򹖜N)bxЄxH tun~)z?uofeOˎU~"b!*)EXYT;t ]4<ջk([%#Px9 VB妚^﫟޾6߿f"|ꭙx'Juw2O_u ʖY>6.!I-bR9c6([8S &%5sQb=G~v -XA) Ut1B FDGw8hn\KlA)R c Y@Hm'A <$XeێkVwvH\HǔܡN6& >rױÞR=/W5Q+Gve*mc&>H\ =B?OIIq?tYz-ڕȏ¢{\na /X.j2߱c*YWW˰dr-xKnC6'SZQ6ri{߇H˲I.v5RWkhe`uO}_@|ᴽs`Nxʆ %}P[6zNmJ;HLJJ b< s8U,Nj3^e,:u~?0py,![n5@, ~ LHX/]:LLrK6^U($S1f4j|sWL#,8ZI}Y fVa(qpm}KG L;Ӷv;LPOkBz"^6RR ۅ(kcL0&;xMՋx-3u@oW@v}mž/̾vHy q~y.Kk\+Ͻ WQLm-#QD)[XP$dtY5wp$$yKj՗=܌WB"ȹhc9\b1RAXeʺLN53ڪ>swCߔn ~PA6w[7Cw}Ҝ첎Ļ%/B<], )""Ƶ'aSPεfPrӘGPƌAc,6ʌ ;mT 5i[v^>dzz/RO.$jXQ;zԬWt3iTRھf9īCZ#eU&p)rϣ219Y֝ n^=/x @w{͵ Tׯmn"ᕆӫҗHL rFs ʝTVi#zz!7LPPD{\"6Z΢S.-<_uxuk,0"Ko;"E RrՙEwA#AyhAGѹcIP@e2H]f݅;X$dT=&y\d bVII[e&h$u,䟂tT[.T S1Gc/4EL֬5"XbJ!#H & 9hr 0M(1jR8đ`T{L4GHPфaǃ#4gw0U-KɝgL}=>OAWIؠXcF70 f0bǣ|6W٦mF6:dSMcTchHa#axZƊc ٻ߶nd_z>dbmf/XF-Թ=,:"SgH7pjv S[yfj7|='н]$l6.!?aXPaQ$m6ʀj|9K7"uRTe@?!Gǃ0~KB7}ߕ_?7_ 77_h]ISr3 )h(AW7?jk޷xjW[Ys ^T!w'elURtwߵ'ܦ?m1˫V1w3H$ dLKeO8L=R`x:FFe 
݆'mXhg`QcJ49hGEH:e%v<NCJ<_'JqL2G٣(u{u!8+?YR(iV0}dI%1Y[C"*FCf+M'JtBJ][L(Iy#֒u0jW5EnMb6䎋]qIYp}{.'&乗MǷmdCǑ ._C_ĝxrVQgg;zjOQ)"ufZywg(m 1 #u@>i01D6u2!c1Rkc\IM΋)ڜr.BmXݚq;J9.BSYB MoʮeG6$%mY/.>\ '\c?o+i(FB6%ʃFh2{̺, |,[95\v58mz#*f抜Wm\K^!5 ӥrÈ/&Xcfޒe2,g *ZI2.Q"AK&(72mH4\RuKZ/iǔg-aGޗqs_NeJNt?س=+(XɂҬzQrۊCFO~_.29ERÜ~CHޕ}ґbc"R.ǠEsqE!oK$Pa(g<}#-Ig!w/}v|褣8E^0/b;|yIvs\%4LtW-Ms^9(0+^nRnQYBIhRfRWYḋHIsV))2*AԐb{YNG,=-PF'f["DSwɋhQa覈6l}xF ӤA2{tY!W/O,XPwBc]Nzu(†aׅ EM o {<$&&-,3)po52C[(u&!L+'19919&L9z=qI]tc<{0?Mf́:r^ɇ8TI4 6*JݸDs1 " @v>I94Q-evw#ädBDH,зTŮGVybr>)GpO@> (]ϭ]׻{GVfG ڽN$2om?Xlvͬuţv;g&ι]v>qE+-jKF|[x;qty5Z>yʨ\bv=/ ⲯq'͹-\mfǷ qo겂.-Y\3 A7>*H u@~I7 'c9,tI9Z9xfeA:N|vITx*: ."ƪN(\dZFy>ru69܋rHàfӂ2WvҫMV/~`g``5_qJ+_-Eloơkt1ز[۷rsG(.;ڧr< bJhh sʘE"&2HKiQD\CQ\3^rIG'`[E{9{i.S&T{~#Co-8ł*sEj.2J#81rD@⟕p(3I)[UR2F'+Y5rV>~b¨>>"2#K9I}4zNg3)Gm"PS.?W{"mE02LY}/2̣P&!V5IK= a@7Ip1StA䠡WD鴵[Jp֏zq8`x⠋`T͋`w#̴Ay]Pi6C[❜-UpNЧ cVJ{i#B݌SSnBMj,r·x9!0TC+EPR(րz'K0tT;tTC*K7Ao&Ű@M>/{<.=iD8Y$&' 6qVыEO8^OK`֤ 4nHF4Z(dE2!0\'!`"fռ)# X1iKX6H J,2礒4QbL !x/B݃)5Rj$p@@eƹ `{! hvjsZ蜢9':dc;κlf#C@\(>۷odٟ^iI7h ∆žiHV t˘ʏǐIEg WXM˜$Z?'MG?˫HLj^=\p~GQ>^I^ny?{)k%e`P6^ l3Q$B꩐zj䮞 lc@Z%TР]JNYJI!ό牬't⸤E='.MS/"R9eؘxSxQMMxhY`)`D $X{E&/,qo6RJnK1[Gx. r>v#eS+ݮ=;ڍo{TNe||=-1~ᤀvy񷊓7@z=NW7%E)V"$sf\٭i,\#@?= Cl*Ϟ+ͳٓ j;.?]M[qT9$R2`r!>X6lA=rWz/^͜_̽^%>(D8'l&=:ozn;MrZx1IzD׭ۤq4WmT`/n~/X,}LߙތG|E/|6I3]倸qp{u3^Z=߭-c] mߵ5[n|10$SM#&MW-[( Uӹ÷o0"PPJL]Loipch&ӫd@:ϲe;e)SZ ^c(`moR8iu\{q=^sZ\gM{0+.VjSP\al][w*ۼɖg4G"EAdK*O}}ty2)ø'fs<-\z"[v'jUGwK(6Bۜ8XQ(<Hx"Q*zd)oG*?ΓLI=5j$_-Oa~3ǂ\ 2SfonewJPȴT\;%r~q£W&ɢ%SdЅ6}!J%5\e"`8hlAj6O6-7RLX5Q`#4V:'է2r,u|ox)gCBɚqΌ"D$s-=!)r:q$$vǘ]$D$̪p c1in" ) ,YGd`#HќY9%>}g[o)v GC -]9(0 Hml$ͨTB7H }" H XK] OR%r& *4fЁ fS.a-8 .$ElAŽYg.0Tڗ8~!3k,.4R{ҒΣr 'e0f' HU%S6Zs^:) #6YHe= ƃ5$u}-5wmYM~ȇA zڜȤb;{IĖdM8nnU9T'uox(RWN8: @k!2”4hi,31!XD-X<'aRlgYMxˮu1ɔS,x**3l{B6"7m^2@L@יW"P=vI)%(.X6-`qX F4ͮYED 򁵆fkU\vM8,3iUAŒ5P^k˚hrJCX$X\8Lt~Xeq) /. 
D,/x&7VTɇ\X:N~T} yrx*NEɜ/+II|'ӧ.}wTIV*r#d*,hb⮆,k-VX;KnCzz \Z<$ xҗU#D ^HAʀvo1!Z%tMކFL3uz5|< hwX@+f@ZNmJ%֔YK;m,tM39(kh ` eDAo 2Zdp7g=N !\W\pYT2N$Jhƚvib,oҮ3, f,yՠ6H*gUnKo"ೃZzו@̤I@ |^3~M:[P0sɺ6Zi̽;ϗvK|l"e|(90Eƕ,ԵW@7]pd3 L[ nk~)R(.v89EM8w&3tQ`mJSq'@󛉗  =OkstِCL*V5Ko&".ńE@ZĔ&0"tŒ&.hI56X~VW$<NeמU7o 0@5a[Fuۋ˳=NA:53(y6{o =V\fP!::6&mץkܛĞ<=}QZ;tǤN'xN'?/@ R}J =u"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R}J h)`h@y&PVjVR}J H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@z@)I (`G @fH 5*~)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@_CcREZq[7.U=psī~[Gx5b&RƗ]{ꜿX6=^xߖ'w嗼-^#\ g5Ny*!ܜ]sޅy wqZ; R!u'N]\6fp:*U~x5o:`*]jpU-(mq L5XNjpV(FJZQbLT1]+תʶqʚ g}&Κ/q2|MMפ+,Xi;o}EkM^\vQ=*,[)"*`EpYk֬#BHB(q57(Yt۫v؜kn\&f^5|5L!W n# l.oQq\yd8s5vD_yZ(՟6[;Qҥ7&#'BG:hM3ةeG)зYseӬM_.O)gÇEA, ހ:#=CI(e}Qtow펠}nG%􏓓Ցm/D {CO_0ι]Ho%W׮ދmL N!2ōɦC芜{]&unضt_bp$o}]O9c`SI!dgL) R(Y]a V AQ)d%K ̹{IaR;索t5mkae94CWR{5^H ~{g1+XD YKM))Њ[S \f@t{DLDmN5{1swY_@˳:"{hkS=y7]W|ԁ졉kD~Gn=x}^mTNpX+Y,1K) ^~cyChɐ<~ѿ3beP!r%t2=ōM(jiUw}B,+潭q}=zsl;xZK|g,G%"<'OL񃟴L3ŖoO[`V+/#U俛x |1? ]ȑXjG9Y-, 鉬TW }vxJJ=̹;D˛ո5Ua1ښ C+\Y.f[7*ܾ}㨇?JhMɳqR{]kw|S-}lI~fYF!gvİ5"qڀSɪ/CʺHm5)zr]7j]3cGǔ̹;3ގR$vB9ʅ /-jc;;o͗Mڼ8v <_] :<7 ddybB#7D@,=HIp̫K4{5cK'44`2F!EKdT{ވ> ǜBHSm뜱swƎE#%kw璵-emB=RUf )mA^ڸj,U’Ed>p!\u{>sņУGu|-)1ZXHHH{H;{i̓u'>şki\lR{ mVzJI;=jKR_fWC`G}7qI5P3_i03{V˳[37l? W홓ҀBhmW&oaO7i`nbx"P~8/AZXho#2|oyvbiYkH'͛V0q}h{*m-f#Y^,~`,|lolq}5Zyfmew䲰{ `x{ mG^(X(CLSK{zq줣tQ_um6naZ Ӷ djt xCZKZn tzzUm.%>=@ǖ^3 { Eq߮hwh6vi[s%LRrX.5,Pj7{pZtm~[qf%!4Cse8,jiTZ9mX'uU\}ASF7eE7'-]p7eXpTFF [A(`QXP F:Gb[M<299&h>̯'] wm7z–vͦN!sp1xƭ<#LnUuGU&g܀A@眊fWsDGIBс"zl*/"Db'!2_#[ ( 4I6SBQ邡a[nh]tW i#~Q/$`8&hXDA祲\"6Z΢rp90I`^r7Ԝk >")9IEdQa\afXTq`,a Ayn V }F 7}_P#MBFE8C@k@_z Z"cJJ@(-3A#(EgGaPG>T S1Gc/4ELR2 +H\srBa&l5\)GH0=&#HPфa7_s$lb\yCg.[\/\0F`>гY l5J=ygs7u~:ɣR"ihrhy8J)Y`րd)V0EtNKA)foNgsPWaY(\p! 
) ;I6FtA*E0'13§C&P"E,B ͯ&g{|wjCM-1)r]Lms2%^,ͨ47֜Y<]1¯Uo7Ńꃗ]bՊ-28,זڮ\eyQ-wFBGbƑƅ~LNhxS/l9j06-HQI64WI5ƋQgBe$ ́}>5(l)囇>&-4S-ﲟ?WKoow@}x_ t %&yfW2XjXTPϻTIe8rQ8s%_~뗯޼N}qs 8j.^ˋo^9ؗ9hO7GQqw-y[CxT=VYW {\XSn 'ݕ/T"ҋB4 Fxi4 NocU|ݶ&uoP$#c% !i̊XT%R3aNY=* ~Pk/]q5BQ mSJM4 c VY녳CdTqaKNSS6TmW;x迴6俴I*{@%aﲓ߂O}ځ1[]pRtOJM1bӛf;u58r ~WQKjds eQ,@rVY8=Orr^:"<|bI>HGZShBxIBpۖhbݱ= `|q1`d.|0,=8 dn:3Q :[\5*w(18-~ >p:Db:xgɼ^ dYHcTp SaVK,%^6ϲهdUpmWToW IQN)93ZZDZĦΠB=r*t+nqX0Գl,{-ؗR<`s-:oʠa_El_, &dr3V669iF6ƒE ,=sBײB@3kxzq 0+eΟ #@3sk+u] u 8rt>`z0u#\? t#L &k)g??NF9\!r -%& ?}oc2CA H)TƁ1!hB .'8"lC񫕌_b:B9/{6P\6zhCfF>|&RNx$9`ۂM,IuR3 ޸^xzdwΪ> IV<\FZF5Y}G7 sLZ7WCx|V>M9$Qys`2@{?Y@]`)3*0eVsl2Tt!! )ۜuڇ2(/eZ  浌FMFS3߂gh>)jB:$f 2޵{=:Q夬R[luu x2/l\ҝ]ֈJZY%,QOL2vf@q1x _Rlb 4杺3;vm{e6*`=xKՂ*߃gt(coUo_ Yo}_f=19->lrwp}J;$zlH\UlpO XId=Ai j]*Y=NxW˔iӛ2әO쪅;Ebhs˙u< w^Q{EםܝНĝ" ΢(CPFgJ rTc`\c"bR+a\ ~%?nw:=V)n[T"bKR-2 0CJf/hlN5rv)VzKE6[V泖ivlִ*?4nң6P؎o0ڈk"fG5$op8мIP)) [-)A_m#}%gB1ib3*ZjI{!SdHh 1d[N)tBI &*P j4!13 &gN9-ʤCU'ΗA yDVrTPlpLץrƈY<"^M$G"0L2Lb(-N*KnkU3Y$X} [3!N#)C0*H禷Vy= 9BY֓v# JBSb 0ahI(R)BJ Y/ruFΞrֽ긼[wjRiu+yҋz X n,R=B"Ц|VR{Gj2FrEaއ_aK5ZD@JBܛZZn>qR֊tHD9AsJA9Sm+P͕XQ^q8:Uz5FJ4^Ǵ4:`kRxp!k kUh K GS$ BR{ikFhKER1l+*|e᫓d!RYl!ƝC1'q'ګXp\` \/[iuA:ZCC,{MzQX$/,¯r^!G,VwqZxooFxX |1"Ni~esh||v{d6+B㣗!,/{e%̳Jv,׳y|mFX-R`cA'!s]Ks9+M$<#ވlanT_?Cė(P*Ye-,Y|#9s'f=]mH)ebp9jMLpl@ʎ+<}YuoM(dPҢFRb3,qR \8^L หV=Nj:K[Q_M_Djdb_bSB+nXRQ^kgu ی5{qs>ImrT ٢9ynAz6#;^{vWY[ j59WYR=pV}{$r|+Z+o_t߷U x ~RsIoQ"szu#*a |߉}7b<>"NwʠqԌ&_{ ъ٣7˗-HM/`bKeJd܉Z%OOciE^v9a>a؍8Ga_^;yM.vqx[j^ U`tY/7tRnNFd{y( @)43|)m-b?Bv b%{ `z %t\`BBqe/7L[XPUS'n0'pz8m WdzqGs^uI=H8(WIx4?wVtvbG[fF\xqbc}{+ÉN |$-ϗ=|,Ap FÄw1b՘d&_ɼSUZ3օI}}-VniCnsȱ>ƼYp:᭔8}r&7Yfg`52桗r/:{($:Qz:fYd903E]@]7jy|>leEӇD lH4nQJr#/yPJ&\6 Hh<`Nj%;a!}'H?䮶+Tǫɐ,.upqD)D$dULZdiP'r$yQE S*SE -:T[o:ALB;CF>DxEU~-r ^y|9Xo> &SdJF0)q )s8mpVл#skNo48Y_301ZfT dc*[4-IH]$(wcPĸm/` +?Ȉ!1o3Қ:%җ4{5F2R:|r {!^z? 
nsKCH-B!!y#8#f)<"EosttLS{.,Ðd}+'<8Ƶ(iJȅ#H)N(`} Otٕ_K׃YZde_yaY|kِFYNa)6\>|6׫4҃|m| *Jgƀ-Z p3yH2i ,YQqK@ILxK&ήRrg~6Kat{6&xyFY|iKYE\fsAflHerۜAi5.l> b_T?~⍏cb`َ%>|m[۵$`<mW;J?+6Ϥ[gRzLۦۧw8:LRx'ZŊOƣB^w܎el˛6{5quV6RFĚ篓)I1 &yP9?|apWVO8e8ò3|j% K(N3TxI-25L/i3Rd!Z*v*^]_ӏOdj>rM4ų$(ܳIx>ѵSKʚwXrjyp^kz/cB4[C ;r?wGӶنviZKFx]8oz!6>D !/0锕qzHV:U6u:UOU'MVaq9[;o}M ~U_:KƂdzlTo UZLxkG(q&F$ `@= PGTCQd ^efYfdI*p$y4Xf:1,S:IJ(Oee! E̵𣏕O$ ]QJ)ݹ_d}/C(sGb!B}3\[(ruB<ghn>)p 0>Hw4f8^ ܝ ]$ޟ-;yO<>]6$)4LJ>%YXf.!qL2GsNz 4HG$#:c`1HI!GL*dmfҋ42O|&I1D)f@B&ךW1y $7FH--t 0t/KN]:}uە j|fm!֖ [vB$ncECm׎6-Y{KkǟZyig…(m9}F꤁b*` x:Im43%s1v5N]&:R7.Z0&AP]Y.mM)K (oK+&e}ޣaREǍJBKCX?<W&Iշv.잡V:MmOGh?vvFd? /W7+"nu5a]aw-.?#]ɎG_<PA=\ g0f=;GuŚ_^zhxyɰeU$ȭ_'6f7ׅ֜.\aJ_Bf]R\پtkTb]7s)Sa&D8UNt<)M'r2a`٤d}DP $)dIg"k3L"aU)^m9m5t69<wa1 KdSFB/w)x2y:to)@j7FT ]ԥӇׯ^[g"GWuPdТkDÃ3Vf;sVPu߈>o pg&-q@KVZI` gd F0&%$$\g2В S1נ IB10")ǹ Zr TzsLGekxyπm^nŧr bJh}B ).%FArm94[MrjCuIwrfG2 ]T GE/`-8PHynϲFȂlj':DqLBcU d,@ 희,g9Pκi᷃Q ЪdHf$'znC)ԧnAuv8rl/5E E}~e!Z6Sx~'( ()DyA\&HJV5W G}Im<32!`|$}@vE9 Y,7KuJY8T v`>L7YCV3 \1OkK%o%&.31+%ˈm7Ԕ;V+|-!c#S /=q4=smnKI4&,)8 Cx3)jrt@eÊ~]Ҵ_}zjtkIHeAB_l:&uo,, DbavɑM2x`>M?@xB]+e4-q㸑 j.etB#= MA4,# մURwKn.=)XA *Kw% G!36d(x.Cw)kL1 bǝs@yeHgY`q.KRfd֒ "Ykc!_ r:g΅N)7.i› ahB[Lho\tq38[CjjGԴ0LL`r;Ey׹ri^*HTNu9t rk B "Uz49l}K{}v=7"kZw'Tws^QKXyeL'Hd@\ sJfHV:Ƶ笁^KeeњylXޣfM \>`ZA4t&$l-o&ngpen¸Z,nL_TN"/Xu;r5g( `͓Y~*;V;ㄯqD=ˀkqx+F cQfc4w 8&MR"^r.EtQ2qsR ޗF[oDk%`peII5աdCbqj%!ّ۬/'WǞP[< 2iIYxi2&2ak'J̲EUPXJN5ȲK^rS╜OxLh-S- Jp%Uzr5٭ݤG~|Xa*?p>Ie* s=`6F4BE&"켍Ű#6kF(cGB[JPUPMPpBdɱ$Κ =8*TJ9JIkrb&K]1D\+v9EcvUI atYMвXȼ26Dn3A 0CtL k L>čN$ӂt"F3yӢ1id3 1L9H|8 , \ )R:/< *1.̧*}lZewRnc1z/0y.7'x[3'XU奔Q6hEK&R,P4XjQ U Q3\ajN7vCbIj~kxt t?8xٕcI0֣T`֞S?R'PR5gRuBJ]B>a@zmX5V jU`ߢ*'%1LUkc6FQkc6ƹz' Z*SZx6FQkc6FqVґ` B8n( 5pPBQB!^epRD{BPWzə6FQkc6FQ3kmZRJQkmZƨ1*6g^H-cf#b mxzͣAV^8o֣z\^4;'l\7@iJ镭]\_{O|s{3w+^̲]޿3/o+!>~=.JCªvL?~Sknd7n.0a Eq-M|o!vĶ˺{n%~/ 80N {Yh7诹xaܘA-LmsbB3>hs O/Z/@c$LNt@ˬHt|ʍ`:DUk6ӾLЮ&E?%tW\˱~񆨤ɕYEkד'5q܋џ q0M`->i$MU:Dc^t;uB1vdF,12 RAYdsfQe/ev\ %8Π !E0Re }dH3N /z mb:2}f 
9>6|8L783?@vSFvirTLϼNZw%l@hWbt&%4eRD7,9$EWYl!6$qK48yT;b1@t eU(wx}T쪏v"ʠCo~54xj|=maRyO!#je6XSB>ǥD.2Kə!'3 )!&b&x7M[Rx&ҿt9?O|)T* YHI- I:%tHzH҃Q:3w[)H2s:6ET}V|JEe 1X+*c]W $@ 8.ocGo5dm$'O9Nb& nr !&ͪ}!wiV^O9cb廝,Pɞ>N+Ą 3g€BIFBPF1t\q3XSΨ)Ѭԗ֦ \hU&eDlLr$.bל3pT*v兦c^+/<(/\BT6+Alms&8w< ns,^ 7Dk XQrQE/]$9zu)Ag:`!Kچ$ @ D-L&uY$ ]2v̱;g7ô\ ;Nkʵ 5C37IJ0/W !.J&ML*.IC&ɐKҢIu4k 5 ģ)ĈT-ߵc3pvV/~O#v制ch+G f,gLIj+* em8$#E., - 1+oSV ζ6WJvvvX>?a2-i8 WiGnYxHv@2!4՝6+gK;c/HHR)XxCl'0\Ihc'?O.!-QIʟ>uak -r|iQxzzCZzkA|_nKvY a70h=jn-zMmFUP=z.vd\"J7nNntLI,T?I+wH V^nW;%QtA7倯N #by?&$L"1$2݆υhi 1l6lYdž 4 ٤e+W|2+aV<ai֕B7 ;&Z\~z0m@}O.rEGjbs?]BANA7mN˭+{nU㟟;4#cH*,i6n HrTl̄ ̏UݳӰdǜOn Z߭4x`w 4w -'1i?XR&34r<Ö-)^;݊J+u J_;8CsSzAՀ򡤨I8x;2t`ty%gE73y0@Ys"r( :D3c\pAK(%JEk=r1 r]U$7G1MHdMѽ?[Swemb+!gTERlUYɭȾyrT$J~{@fpHXEJY^=}|U`91 Kb( λjnWz,GrrqM@8&45l|fy\h';\.|kx%KNg57(4s4 (RϩRdP%ԟ!G_#|ӏo޾|⇗go>{-e4$sky+^[k՚|w(9 J"'0"D_Et)G6+!3jW틭u4<ݖV_d%V[vNJ@*EӐkGiý6,KӲϧc:qXH#wJ%#U^[& hPdI.sAzg"&Z Q zSR?퉃?w粿Jz3w<')nͯC[.o7[tv\ø[0_ Y  :p^K0B4 4Є G #g GrXvܭH!*>fV(ɘ!EҠ"AqIADh\'D>Xscc IƆ1 L$O9?:SpbƉ(khL%%.a3Cxlm"5Ḃnb/;[Q6>}??I{:1Xx-| 1 G@&ޤ(IoE|w"P"s\+ŨqVc,A\d ŮՁ* !f Q7#x|.eJŜQ1fߓW/N79 AY*}\@TzJQ(:,=: e݅c2~aJ^þ0(,1U!y%,x#5o8XCv9m9A&wZiIĭGVH*/`ֆ(m ڥє5!\{HȑM)tL' moց2});hTɿ^ V~.5L}jeZԽ\cdɕ 2(KR@P5WV's0nv;n-sr4pqgھ>[iqUHcH] DǙm$%LpBőp_8G½Ȑ7D"Pd4x!0lYF=pK}d^-i"ŚɮH&;HZ09B-UͩA NOljm95{B$W,z,͚XLMsN5X 4.wʼewޝhZ W_گZ6 Bx3&Ø@䴸}srtE`LvO52)@13tҕT V#;LhEgFZ`V| !ĜLF[Phc"1MrE(@ Z0*QYíA8a1rdq8M. ]};r@)|X x=:QH: lpfwt=w=}Ϋҝn].|^OO )curk]~6gvjۦy0x |"/ ꎎ.ܪwwVF%Hz5_47{sm}z7[χ =8nϥC6W-j?mŝVukhm3F?G[HęI@ZXh.Pu:cƞA cA牖4$OLq.H+$䤎GvØb p?ޖ΢X݈2c$EOqc~dȣhJm]*7ufrR1@XP)&XD&r-8C:PGWV-ƹ IOW[Ү0Hȩd!<@?Nkix!QX`oKܖFpL"'`!e.TwPJKP{ۛՐngٓώ/Y]-KzyHqQyK "dBɃAm$RXGeTLA(+ d_ =V ͼ2 q 23ǽN400Ya%q)PAt#[֨Hz/y8e&\ &gr 4@Se-7BkC59 "jT_H`復5`<$$)s{'%hUSDYB4$Tn+w"F9zz,Wl|0-J;jFK+ZWVI(≯Q=`lu Ql0wȯtM2 qpH&l7:GW]stu9:GWcūIb35rXQc9j,GvR娱5rXm5rXQc9j,G娾q{8H*ŝMLl<9_̲鴱8i`0P Ӛ7`ku! 
;S:8q̝as.XD,N0kC ڥє5!IET;|ً).e 4IB81 @O5Zks Bx*]L9M)tLA 0qH[Lqg^QnKm)+1񟒻 ؂JB>@ G h2KX@Vc Tװ&,@ AݙvM8+`$G,@qFRrhD ſe}lpo6[H"(,7x!0lYF=pK}d^-i"Ţ6U!1438 IdRXfqT9* Y@,l#g͆2qj EG< =kR?Zf)o獇mwMOݵsn5 .A"W,z,N#iܽIqΩ FA W-üscK]}Ѵ>\VUa./f|[c8hFg~fE`F&e\4&\yTPjQ21qĠwCx팻F]cUE9#^P@B #-:XmR$vyPH!R F%S69k5gsJa|Xl>&wŸ Q?;ҕl[:ŪmsמrCtM&vRgƆs5޿Ŭsw>\EKVҦ9[\ w{}H1Sl`vuӵ=Nhe[g[6ز6|xw˘=z^i}?L--n >{-\ n88nZ|wsc[[s1E},73\osfгaMX""lKlZn6\VMbD?f9sF W/& ^+ 7g"GD{]RI'$yudy4^ܕp^Hؽ:2LWЫCDHV#Q~WhO|O<̀V'sHV1`HW^ZZ^xw'pfn_Fxk1F+on7^A KH!5ipIL09ubjY~0bEHiޅv7y_uoV|%Kr8fou&ǞM_lA_!Q EY'/UpY]vk. Dclic%9qaR~񚸅d]ژ~! /4KkYllic,:fy~vwN \5gOR6%~-] hB*psV̖:*Dί6gw Y;c K{߇;3J{r&֩4 _bC`\M `E4BV { %k"[Jcbݸ+rd7~HԔ1{0+Cʘt#l4ƥ _D\'h\ލz"vvt^7BˇΗ澮{q2Kv-rcceaTx xN1= K5z0E\+namvWc!hm 4wF *ۨQ1,%! jɍΛzrQbګZuq}E;#L q<%YVw (KA)ؗQSq̋:e-,xQ*w$D pxUMPTࠉh⍡Lhu*N"`aGj( ^ 슽`>w/kj/[egi#~ *B nAQPVRsFDk9UNX/98;|PWqu;"E Rr*N֙Ew@K8 (O _c? at u;`QКhg^pq"cJJ@(-3A#(EcW[ m-Bs* wF+Z#7=ٻM6un0i'Zm뫤Yt2>\z5(b..)ڗD鿿e%47m*Cӊu3ňrK[mW=nkE>L T#6|BK æzatH 2b_&X"jtH>ma$E"!cBj&l>rT9O6a]J|a^ nvnw{)H 9PhҖ&za*kpx*1LU@u:Yuarw =ѐ' hG7ìZ, }ۨT9@]/e_;/ D`|YMHi[!j#bƱ!J/W $a]Kei UJ\I8 IcLh@TtB)Ĩj'!9qTQ& E`AkZ uFNFv;4t)n?H `Z2%t] [אOdb㙘L,LŇbY_[uv1a#Bt]1N[=Lp?K L8\a%t)8.ݧ/_s4iP}id%#Xt t΁9-%7I̳nȾ;mxw4~f2q]2 ^R`B<UX-zg՘IDk1hFs+%٨mGy(#($p7'qѴx|FΔHh|P*f !*jb3$5IXPDg'آ]`FʩR ˽)?9E4\*vXBè%s9'{W/PW#4x(P+MM<9v2T+k[ޔG׿l)pfZy{Z*e=U);@$ )5RX<KZrY5P2zoEXP=)Q[M*FfdbN\n͘qr]3Յ2..<.Q6OGFI/V(FvǏc׀խÇ~o}^JJ8%Rb%0l!7idU@hJp4`Վ5uVܱ5I)eN >g;<N2Hf'`N0Wv֝axyyaja]WNZJ%5ܳ ?Lmd+B9Oaisr+.Nӭm\84{ mn1p͑Ih# 5 [޹|z:eQs#b#8O#%[k;{?{WF u{ ;GiT,hII(dXʌʊȈ 𞸝F̙)iF}6wf9MXl) u&:׮krsMGP^w;BW . 
dNqMS7%7L Ҟl$8ɨ KhM9Hjy4.pA^K`Ry zS, }:v 7Lk//owc%݃!Ne9ӗɵ'cZlM_RFӧ1e)D*)Ph{&7׃T/ZԒĀUXY-EaNelg>){۬ҳ]Oy .jSϩgwYc<)P BM܃ ?omTDɨh%y㟽޴=hrɢ|0~x F9{w0MfC V95.roW{O7nذOio Z㙾='6۫9^ 89ڥޘjBRzN]}SY.@NJS ,mPSR$*ɭ5=qV#ǽ\&}%92Ĺ~<ˑ2գۂf˶;+ʍnW/Txt6 쿘q$ WUS Ad+!uoT`pږLu˽YӓvlDz(O)JJ1"w36NH%UZ 2H02PlwCELqVҞՋ{W0R^&\9ouhVk^$nA[&KĻr`j#JZr08vH3ʣ"lt|_xq(B;+k-qO&I5WO6ұhiF|gy'xW-JceND$sJ+LGfLRq31JYY/@oFIj /;Yz1U|ڱ\Wtr') ỖHu'J~92E?aGnV.9`K }J+BDo kV9IՌ#JrCeLL2T=L0Rd=II8 QQBJ@Y(͵((g ua]:]xP](?6ol4~Yol>1ٞӋ?rnO^N0C7Z1O,CĂ2"F68a8bElPd*)YTcsÓepgYmr^:GML 6çD65v1r6kl?%-ZwEkNkwv+ J$ONe18.&Lp睤r)UIL&Շ H*CFdHz:j5HP #(|L2Uڕx)Ff}X;nY1E#]5)M;e@ 0tͨ=:XB.pthTb*(-7)8Iiʣ,e=Q (9ГfBs_AaXU{ԋj&\mu"#bV۳hۢqGKlgEcmhIl;NfmoΊ벷-I]<reX-ew⯵8/q XRV8O+4 ?9Yq-S]䁲 UOrU/ ..B\9XW\VoaחVxy;R}tf}}nq=߷QHs+ڵ~S aͤl( ̒.䦓V "qb4抋9LmԚ3ęx=7g +!7w* 7 YYYV~_ƈæ!,и3ޱ~, l55!`vy<\; m# V'X(ֶSx2QcZf'MD=ˤ}@13,dmK,ӳƣ+L:-&-4Sr; $e4MץVWt Y1rI-5IxØ9ܓ`mQL4K4,!ih1و9+epijT '763s *VQ[%Ybl4Zם`R>SV\&@Sכ6OJS9xө9Zn.ܪ ?E"xT}88XYԄYb]^ëRd! Nx q.ֻP猁iLkS0-NPD2VȀP 2Kj +5C!^}6o^aݪ!|Ię6@}$fג8G7ZsV&KpNBGa0tL6D*4D`\4*KR 's{ "ZE'r;A*+H;Uor^D߼7 ׽y~\ !LrH5!Ai?AF8z+d}tr` #vvM.pc l5n rU ֣5#ܗA@'5 7B:&;9L>e4rbpWG޵H}G/ƫ۷瓫KeX>|w hΘuh7-ډ\Ƅ{L{sCu~cŻe`51[#\4_\Nזۇ] ϷY/_bq0NLL[=9Ot4 kFi#JqOy ?ٿO=ܚeddlY5.fHX"utШzꜞh=Oo7! xx? 
e&Ff?n}cTd!j5UyT#ӯ{}B_~~|ˏ}Mߡ jRgO {Pj?o?Ss#O=[R"J>rü]W%=bB 7b =  | bN;,/khyC +wbJRpg!ǴsmD|J!ByfvD쯹 6|h\TѦ\#j9bL`72HBXTT.]w&f{TD?={lMK'S85Y[y 3AL'!bR9'L?qY3 錇HQ/B|P(wQr9w!v ;>*]pl'Wym0MM)E) >*(1 YR/O0"Wun{>k>ҽ>{ڗ$QC{k=w/y.Cp }NmT0ǓTg ک8j~w4W/ywP͟Ɠ LiZgIK\%a?iy\i[RdE+%2 9l]năق:n@ϙ},t~Mܵ>?A^wrS>DS#o{:4;~]bl4mdz,&jqv4kƟMg`O͈|.g;F531ko),ɬ޾6/'KQ''Lq_;O}%XCy<.b@g^7A+X)24llT1Y[ng=ZB;{Vl8_f7z oE{y_;tcρ[B+ڬM9m7,b[H,:ԫخU#Ʊ]}͝/7+Wޯ-nc1-t#v@^Q}tÍϟfS5Yy{ve.UsKx݇>Ei}Jѹwj6ؼ͌cu')A2,Ԥ"b!&$ A#A]z^#rQ8EJ *h IiP(/*&ZRr ZA:(IĈcY$MLȠj bZޒQ"'KAzglֽ} n,繵E`5Xq2C5{jJ/|]T{?L `6-@̀W 2Pioغ)-[ʡVi ^Rڕr顴:4k~T)uԓx;Q2Vv_|l PSe z)^ ?+׋"8_%/(Hv6%\*Aj]R(^JȒӶEjtL YL!c !H)2.q:-D5m[WlחroAy0}NBT)묋HFs*9’/*(2kqbdr;܆Q$Ndj{hulge,E] n0L?/Ih DT IuKE)]4%+r,GPͿ"Œ9ڶZ5Ckv3q~(* 5 ZD^FBa%[}DY'$\8r$,&Х |~)?w[gz h@}gPZȤlRgUrZAxq:ICtE1Rp>?uRL%DJ( "IԤ8:q"L'^)zZ,] 2QXdYylYwJAFNQHuaj;Ә'zLEv5>ӱ-80AzLB sN K[fb:d=Q<3%[nut!Z:kL^N'Nwa%M{r h4Z (Hj㞧6!({H,Vt0qU\8*=e 6:$w;]PKK QH+tWv䮪x8U`UV~ArEgwuJm90b]Dܗ8WݟxxJ' v1al},?7MK&GyHd {,k?s'z6ܽ&} aҒ~7 )TrnZ nzשRؕiE;?KaewϲjQwV_~xwDUUZ&"*o2)fw%*}Hk- )Zׁ>Et}_k%9İ־µVQW䮪NhPE軻R+tWhYO+0u?~rn/VӖw60&~Ryv|D !"7PJjtЀ2>H%[SLVk3$:^@:HYtFºlHIڿƋ,g,o_>g+l.-mpkt&A?yh[)VCXDT٘,#-L*0%- ٩DIDf̯JgCh@Qﻶf:=;a@w!휃:4O ߮9SKbsֆFJ4tm(XL♘ZlϙZtZGL-80]Q&.pE((D Hk*`cHk0Wiv)pZˀ$c3kTBs#r\O/$~]q}ږ/y@qU MQbB V=1d2RJhщGnu!jJTJ$ !xM2$+hr$K"]eo!`)Tn&"ule*3_Uh KȪreM[lh(h.tB9OFؘ7CXXrPmE!r}!/&I/)wV1_6f^BÁ.:GYش-DVSZ2`1.s eb)b$7XllcmWmSom-q$]rS%}:t+/c_<]2r:OfQj-jTfNw$mϢ/vO_]/ˋFW'Fgx3]g:m-^m eXA/uC2xBBZt D&H up} f]C^.G!n|Շ|6~ŋW-y[?ǟRMln~=x6:[ãPe]iϛzvf)+<[*]0ŕ;quiiRVl@;Ym> Ԫ-&0) cqC) ciAPª ea, FRNǧyr/k4D0ԼO. 
hl@#BB\!|!Z:`G:g %GmbYrIf(AlӥHE'g!t!,3,\q/ @Q5g}re ֈY77%0jP ?D Gzooh 81/kMfj#64!o/;-x+;XE{-d{KFA@#+KIFdkTЮV+$΂΁DuZhggsmԛJegNT:#Ù۩9O}͛5>g'Sw`j#JZj~8gBGVυۇݜ[Yyav|{Z+`ycfbc+6V_VS2 H6N EChQm_HR'O6K5xdGObd(sJhgiK]wDL|'zv</ʺ}sk_x3 *m`yO\1DYmT:p5ġ޳ EkYJ=P{#XQ)$Å - _;"t%q|W\c5ca$SSЗ3GG~%n">UmzʍO@yP*4Q[IĐ%T^;!eFv䢽RfZȑ"̗6a.3Y`0~ XL4%$gY~VK~DGm% Yͮǯb dr^R&wErpER?%d2蠌DM6'29FVUm hupz?$֠(3(m#WAeCE2BƳ' sqCpoq ':[/a#$ĂdCHkI%YeUTtJgU@ Jɠgga>汥d>FA@xdT` :'iN^TW6n{:pD%MP^Hxk@(#fp:xx9{'Ph.p]8[YQ83JS0|kvq`fe9 o>aPKLcf1T8֨EKj8J]Ue 9e~VzT'ܑ#Pѩ?%'5Rr 'a6Ë8>I^7ݽ0 *dyf%1tM9y y97oV/0uE8}׃*Šn#9;;_\9ZpAPKxP+rG+u"tAGWd'ޑ Zowk=X4/;"kTΫƷWw݅٧uZb.q ߾[̭7s{kh0Ѣ/߅jrG8~[?Rq$׏tiy7sY^'OZD3XfXxt͘Ë|mF|MnzV+=QgeO2R>Lۂ<(ꔟdiSsZ?fj߿E㗓enߕ]]B~°Dڬʀh2Xh~Je[ /ߑ}^}_~ǫo}N<[2'Md= )0 ?D1}Ç4rhvSw{\TG0_k-= U!N)1 v5ᶞ# jf!y p" vJ6౬iN.8D)[ I Ն'_mXjǴOE垛:\3 P\Lbt1&1@YI=d\SeSgw1]hwVE.p,gCvj7*qWn)Au4/ QyǿF[&M@5k=|&q~>]ׄ@=axإ55b}3P sj"?QhB:ScCB܋GH㪼h0O*4 */#Yf.LLE ꪨ"DpE=b"dLb\)"\Cu4ȫ:U*`h'G+bݗFHޅY*yQǬsK+AB +hFA,6ѝ C|ˊY/J7$$}ּ|Jp@IHZ:GZ)Zη\݂om]pBۗDzPOZJZ8 Ex>nC6]j6{E܉'xv.{;z*dd4魏L+*L+r@aFmY{mHm5#ی֛c9ƪ8S^YE]ʒ R=)(RrVb2sv5c5rvkzX.B[Y^>.\QTm~Uz79f3򠭟E&FF+Y1!7Hg B,$8a!(_AWQPitJ'i(^HEmJchT'ɷcgH_Sf`]Zq2N\cոXv`1Rqy9dAd%~\ZdAsBqd\ܥ00 ^<}X;Dv}>|Jl|P>a<滃g~tŠя9f:u4uh\!SЍ}=>RУ96ڨRqI 4(5۷E!ޫӊ'˗ؤ}wi0&06K ̫Oԙ}"cje!a [i}˫.g4d#m i4#{_M.|sKrQ *KS2Z,$/"`{ W,XP&kw:6aXRf 6qbGz@Ks vټS*2օG_n647ۨ_@PԬqx<᭴pRam ǝ):eGhe64}4Qw>K1`/+'{z4<,02yT;o| ¦UGsS(^jaC[u, ,Ne- 4_E({]p⛔4*nbRA!9ReHQ{`t{%Bd"3D#596Q[,qz's(R |d .r1q 4Co2%O&("0g k0kUECZ|Le,Lc} xyUyu{מمRrSCG?.5@ZK:aVxp`8!~SI@}5ŃO4?èq]u;Jeo|,\&2 a"Y*rZ;웳> ;P+$1JЮ]ูb9bWF|{ns ~\\j],:?Ԟ_|Q˻}j:- 0xi(O4(u~ ɀqp=HN>^:{W_}u'gJ͂[uЄoR R&['t6ٙ>B6D'>AĺLSVZ -J`lQ8]AkR2֦4hQ2cKg ,VmU#$Q&GPJ]y!@9QH\?ֳ9(A:;n7/Ìk:BАMٝ^ϷZ ^M7 }!ANKTb^Z{boUߝf-p" "bb"Yzs# *. 
#qq .JdFby:B(74&L&)'kd,wFsoZI[z)3Ʌ`ed r+Aa"Jp'X" i=Ԫ`>OZ qfDmH)GLޒ5NZq9ϝ JpeGqTxbXYfl쪿 n 3mGn'h͒k+hnwrvV4p2sRK[jM9A/hmVc}OWe`ú) S`/q"s\n#4 k^< U]jtaWbX!6&Mdu k*K7ʵ@-ظg(U=dR4'đΚ0yiFcBJ:(FhX:s6bG=.'v!bZP;² (TܹQ4QlJ]NQ@ Zs!HuzI!h x9Yh韰D̵V#g7ÞV7>}~!1j!6;`} X|4s祜|~F)ZIq̸FaM"43azݟm=AAz2 Yy,(h腈!K c2p@93,$UN;~侅%ZX)盌c:4xj 3?ꧫ RRd +$X cVpĽJ)SA1CLz#zdwӴbn7y,che۫b7޿y&+q5ORȳ.Ӹ5wqF,臕HuҦ@Sq_CkCrhm-*Zɉ?^tb+ 9<3!38 f*)FUvyt_ K>ι+ԑFofRHhlݐZ(-ĿbۈGD兏^{sBIjMR1S %K,``:[flYr^~8~wq~u6Ax~X=8]|[t!,| }c@@gevCUx'g]+ ÍVY* +ڜ*/I_[gV$.I,ƹ-r,zr>O&k[|\yȶݹ1 gǷ$7zM%$0lڧɇv%/_t\,) :kP[ m  "`R/OM&7SrZy":E?؜3Pv^9\q^jm"' dC 8m7Ǭp_jB՜4]_R5mBN9K !2[#nQ`k)K ^SO4mcH"ZɧKhdA j_h29 &. 鱆"EbPT,@AH}RIa5,U`xd ,(o0(-qEe j /t1r7]O(^Yg< =5=\f0ge1Af|p~JU7Ɣ.-s*9z3^?pjU?r/G'y Bq$VgGsBPgNtO t_y5"6t^HKqO(uUGUh|QŗҾ_M0S`Lv jpt-^k_B^]hX8ʪ^dzjC.hUuh%Ǽ|5Ϧ{?LփYz"~}cˋ$f c!xln!\Mx2鲂c/'Ư#isHQmz;MeyVi/o~hgZ -C˧Oq-yŸ;57p2Bz_bKYM$|FV PϮgtM#]gTI{{E8IhaR,@uOJvlkm]6߁O6ݝ>[4_aB*3;~mN(ta0g/yuJ!u=)bCuwd,gJP^8ȅ@Ps":j%B5I=/R$;YsQh:!O}ss{XBa҂<1T#Pb:M֩侱Nu>HS$ eXbl[ 镯e9XS AmLI8VCйa?h-#1G#8K&7V9]H@l}=GKd28cO'-\P5JZRsqLL#S`{Ԣ9ScCm&}>w92IfɃn-<<}D@|yJg0)*o*BksI90d,;PL0و) \L.X(D)2ZJܫyT =|HٗQPJYvZ#c3q6#c; iC5BcNV~ʌW$}eJ4]LWOON&/JUL7T﬊"(f,l%s u "hHL lky(:{ `]:Z Ei2TTDrQG+q6#v soJ(Ҏ}Amףv`׆ R0^WUJN\v٥U0jTE #LCff(5{::ƚL"YE'đ} 6UO[m&fm3/glZF : iU*gVBV: {J;@'uR1n&c_ud\.r^g3-y .*C=.xe4,"ŬY[JhoISY wʹcOp agq  RWG?N>oT'XV/D(H-r[m\v~gv[eU^Z:'-ڒa_]EQ@`mU\VY*#N\{`+BWBfvguFFW-1 4A#mXt teV"K(A( 6ED" D9*Y*f;GsJa}gc+ pwց\*tD|55yƟЄ G\-0kN*JDhj(y&yǞAO$=K :BTP{%U/ig{Xi3bn?ZJ6 Eg7u@F|W7 `8t[I &0%2Vj@"U:A@tN{V0~td7i^}ufٯ;b"r ѥ b*~W50ýFL7{p(ջ^_v}]^*_bFsAa׊I&`:Z}kw+יY>zzBm(X/KNJw1]tY,Ye)1vv֭ԷYFwmq8u[(R}` aiXxw_n.;.WŖʎq9uL#%,cwl_~]$w,h(? 
[X4tq턒}&C^=_|n&Sv6,͸M>?4_fb2ř/E-N|ӳ˹}ǿ`6|6Ie7kUi7YW^LϿ]N(_='=7nC|%|^pfLՁ7B9"=$#A~~O^cTe=ydyXQr7sE IQb-Tv i}RJ1Թì֠uX~YZ+q0C@J&s-)2a]M]+tL^,8:ru nu 㱻pGvH<~6s를2G0aI" €T*^h_ ho (W4#Ffxb1(HHRQHgJe|vd8m%~Fq/Vbsş d.oz, 18ceo YURb bG%kL9""8 $hH14.kݫv1ٶφKSQ-; m#9P|c{|3av"u`|**a 2j ?-&^psGÒFyʻU `3BI1z !$+n!YP%T`eK ].:2G9 (Fj X3ζ8;m[t>"Hﮦ_/O6f~];r6[ی}vφ nv˭^x[B#䃢ͩt"4 "|bW =E8mW nw\4Vlo7 y^yʇ-n'] ^ޟvnROtRE2 I MA t{A9 Qn"HGepԅXSTi rVꋶu '띪:8JW9-5(tp :sś Q5zW&1Ig㵒%OһK'ә]~-Ů>],x~+N(~g0L5fb^=^ƪrws n8މh7`LH c1 t.Eh9.sG ڽw*DƬ*MRV2FUeuŀZ{DT4Ow&Y}ĘVZFȇlh}$)1?(2’Z=wvg;L/[IzmϨ"wI!Qt jTLQ j\J2 8.ZZ}pɸHWe01vJ[ PJf$o0BKmǷ(-˜ԛ8;n~ADd ZM,jOmUbH`|Jj,-|m̄ 6P> &tB],gҤ".&p%[Nٖ#*d锳5Uɡ 6D*qIhM q,\hbIPYZOrJ X4d-Hȶ +FrCǎHǦ% T:|g=I#_4P&W|]ls>LngM~nyld_-ޞgP#yXca'W>|w]> uVj _Ԗ*57zFAW: Ձ R֨;4/7Kl:켑,5x#BHD|<Ӳxr/SNGH{>4{4` PQGV1pz`!0CV{Wꔃ8i8$  F)ZASKh;YyO 7θW .f~`g`ןo, f;/n?tփ١L. F 0cX{ޠF}z.;ٽSOԏQou F}fzE!C >D b@HFejNdnOէ^ Xq숱 2؆i7%%K;ہ`dvuZ]|=V018eE䟌AWݠR|ǃXCvƔR2[#k:jWJ ]1u9U#zFguN!t%]Ts HTPި@\&Iר|5nX4!_i L^LO,~سn_[.n,kָO_G"?],k1QGSy4}Vʜb E(h$oNRKJAVOM WN&buIZXNK[ I)z]Y@qTy:bk>Ķ{L1WȲgd;Fo?x VyHЇͽ1iI&Ǥ4p\8r57 ak6v}5 }r4,X첿fgy>B6h";$NM?ۯR2rL!I1T7~9fܟNNg> g9r)) 2U? b'gO H4GS>~70VD~8JunO|ͯˏ'Rq0a`lƈ:h6Xk4ZoONm2Z"()k4 ^F' y5Ep+.&vDl15,]9,˵fcOSvDж7jNYKRe˯*di Ӿ,ΩXOօ4W,ێI+n9W=s/RjT]feL%O3d#],*TK\ʰWR^dK4!eGmm댉콃zMlW^&&*>VT}D؍uLQm6'fXpFsz[jָ=Uk C>ǘ]V;$soPH"&W!+*DXKBc j'8iON~{lR |!=`?s5-{HlsR04FB2'n(IYkj~n }~ONn_VOd4?ϖNѪ1O8G R:D[a>O 󥥢^u"U>YN;u1&$锤 7N߹ϮRIt*!XkPdO=V`lUp m~'8aRR>R{T|~#vAjQ8iPDTAQ@Gh({4p6ڗ╊"Ơ )#%HE%#*a h*3Re )>[J[QT2䄪mAP7q /U4=hcݱ +R۴%]i@R}ק"!'s $JJCȘD9*dzZDG$ )搆Qz{%Mn!Xd20jn[ #9P|%QQ6qѾ &Nۤ\?n3W']UW͑m-DRX\ѐ&i>czacI*kZcT&*5.XA30%zLN'(v 鞭فV!{FZZSΫD77}|hjs܁9 $^2puL.(,c.X]եQǘ#qGDuPO! 
ݲv^g dT6IYgښȍ_a%'#hj6&ݗxSyqpiH)RKR4]L4(yTe[ =3nt7DxPjc viQ|Z$,ؚC`=X)KQ2s0RhTv&PY]g7KY8 S~ʼnwY*!jTʙ %c5y DF$sgeKa/?UO=$+]2IDmi50\ۀ2JVl,ًJ;5-e,%U=h,$YZ5O* $Debr%pͮZ{&< nc+eL$.& K"-EuHD)gK+ʡ,m4v*FSw/3߹DcdaRLC,$VrJ  %v02K3Ʀj"¾2mkq3yV  ÁDmtK5Zkt3X3\E rMrNC^`5ӇnC n..w}|8?mo^sJՅV)z<ǍUoxgqm5+`7+Xioߎipv&gzA-4:?3gj; ׿9y~۫_?'-8gHecX#ܽ y`J[>:^턺nw kkB/jiSiOR9)}4J1hEG"Kq&j]69=: iglH$|MLIhuprUxV xG LXD'9K6]4@v YCY(YY[50nkym<5-}xs0c0TyHy͛٫. th 6-jn׮N=z&jЫԠWq^J zJu֠W)kпt0ʸ*NB:Ǽl"sY~͹:bmpJ mGRzo (udYoAudl"EJ%^ʗlL4JJ)ad[Z/Zgg:-DQZEȈ\jS4[KR(HQ(8 $\E Q:ɼRS}*@.;᧥_v{WS't>&͏\i\5M$l:d&bae5VTwLEt0;ynNBPQ&/MKtpM .xK ZLB,mIm|:goYɼl˭_{'Wg V2PsZIg R],t/ ^~كB7c1U(.tZ>]tGWӸ:? GtBa$joG}OlL=m\hcMqrg{b/zWo^]OYE]N^.8@I^y[AXvFt&HQ,*X;;xE.H)2Ew=-zޗ-g<~S>XѢ ]],2+7V(/|(1]0K@Y `N")k Sm8MbYjLS4`RLI/贈ôF}q2-u|ve.h\b : U%gFW&ӓEFp@D+տʊ[nYSxqxl$w~zÿ~j>~?{3R>,?Xц4|٧  ={.꠵ţ6Ћ+4%cRt1fL1DgxpN':3>P} ~k3hw 3{o^]M={?diu4su뼪{L0*jY{tWa+^啣:b)nH&g'YZ* *( 2OII+R  hX-( n]ŎO ʢhE-)d9*:f2B7z'Oun١tԢ.25'/x2 VE됯GO k0R3 RzTik3TB=CÞ!eOF|pOB9¨[5/]&B4I jt)ҲL|! }:J8{'Yj% eQˬP=D) :*Z1gv*d:u-Q BS.:2 1ZTEmP %IL;.q {5/ {K`:~9Clw!G#X$Tm&LI bSk> tjvwLA%HG6sZb49Z+\P;gQu){8r^:ngzruhU4O/|-B 7^@]5wy+:ܼ؎+Uc2[m88(9*ac|jr ,2 Z!1c)zs3YA!J"#:[r櫊}: HC%gl.>(RtQJi4ˮRؙ8;vB1bk*3>,`(iw&b6? M&lhT@tC%pU^1ca$-OAl+.ugX[͏:{!+1*P6mBQg|};|Q>ze+qv#vs_P3Զ=j [A FP+Xhvi^9J%J@W˝aabRtFjJ^4&HHLWk]`SudC>;g7N}-ś3U`ܛ㾈:FD#b=H쓑ҎEb`QZxUFDbd:;Ӓ{q4^HG5k{K)%c"t;` 4yȎUJq1pq_3<\^ lZٜN9N#~%7o(w'ϓmrvь&4.D"`Z}_Oz|G@ܻ@M5 VFtٌ6=BF,. 
Jcʅ\Ԕ#cH[Ap}i"g0nq'XOnouq9stfmW_LO})/͟.@Td 9[5{ZhiĀ%І}evEq&1"Գ*K="< "T]^q =0<0OA袙XƢA1R6j!$.$ׁlZ9mZn喊2+k>[ ʰqBܦ(md('P%h * v %?֖O3\rւqeo6n-3ԬRfy ㋷ CAHH:A2UN))oG_M"q=t<83!$s :)kl`R55,&WC\ )XGLE TҨeό/CA#U|t~O- x!Q*As{%U/t=6o5bޕtA*%7=a- X7_ī/O|MGvVDRx[bDwmmF-Gи*^g٪K-BwLu8B( ~,u8 0:{XZSC1hDm-up-נşfZi?m)O[ X ?m-ӦyZi ?m-~O[i ?m-~O[i ?m퐯~O[i ?m-~zS̀kϵ"=iz<"\&\6;)Aw6cN*@:\{ԐND^O{<ڭTw9~XZr'2#~LBԆ,dd ҷJ#jTr$:}fF)__N,"(NH4X.ID f< BҠȃ {g(bOi 14a݅5DH/yi!R"}b%`FVeOv۟kkE ։E(@'xԎr0t1S9@`/ 8 }A2 I{σc\YbZ\8RawBr?J~ *gW9[>xi|~\.h)V7~VzVl coN"'xuo}oJIՇBh2:e =; "͠OR~nF8?&?<0$׬(O%CLQWf"oOgפ/Nug˟V%qUFh7c%EbNe%2`Envc4.s?^/}LğLҗs=[7oz-:. +J)Ԋ\`jN*u5is>*s|8jN'刑~1swٷhpV&y_[pwk{dr9(ɧw~ZqY2HX;Ttn~sYޓ4`YŲ_NƷ˅ݍ9]o܆elIu\ 8WʵCZHXrkR '3~O.vSGL}N~$e4ro|!1.!;aTēHlONh.g)y=U*6LU| IHϿӻS?۟~oikҤ)>Nd=<@7C/mTi`C[9ło3Yskq'"*Ċ"f#C ρ3Me[o_lFx_HZ(+YR9֜L&9imIYH߀H2ӆg?mXj?tdWRfB;xb}&j1xI,*#]TU5ۊdGZSvQ^`\^I2./[}4 K|>Ja[W>_Kn'~(lIR22ML3e'1-2s4 SKjIbJ^-3-HP &"+B dƜebN F<,3I2*`#8"JsYreR* yZY`oALTf-i+ =pJqg9ِa֕-o"O[#Y/d[Ŕ1/nlqzO1?Gs%K9vŸBrҾ׉:Y:܋.qR4)G靏*y:I !1P=K,ӤS* lܥ(}K,Fg2;Nmê)OFp 7[}u䚈X `tR!+iBIcL֖kL QtfW5$3NOIäx1@Bb(,b0Y,.S9K ?Oo;Mh5>9mﱴfۜ3x?^{˭)%kpc)TU+ "}TR)mtHQ`pd3CLښ#ʬ ؘ R= -ZebvADYz$+I ]3*ta5WʺP7]xܸ7I\4 :x~ :?_~f?^kApC XE`Nb!LF^rœd:Ԡ0"`!KZb( @ ڠAh&mǬBWٮ$31qPiZ!؍ޫ, )^+ #X  {d >x %-M \ƪ00,dB\RׁQ11" c*̇YHTm2Wևٮ{~7(x7}5mӈM#nLd%Q# RX=RsDP ZT8quYX R˥^a4RgB h $ $HE1ٮ'zՁlYbHgոdO(Xe^lzqcx%}$R8%JI[,HJ3. 49X_ARflzzq(w D.n}xѼtSG~qG8wwpkޏ/hcFV~|=m_C'ȔK#:˴dӄCITlN8A7Aes|{l1 [+h!%O,"\JN3@Ќjf5όԊT)VC:qz9}T?,4RwWk|ܐk&>CoY Ks4"!1X)FO%[%Iq c:»@o?MMj&WU+k GQ'iwݶg;;%2/bpQZ/yrNf[:]Rro6f!jn7-7=5X١{-7rC˧y)N+\t<<:҃Yo#bMwWZmEK"֎_(9"j73%E#/*"溘v2BW-fH77t3 # HY&s1V3-CpD;2̤qVd!dF셋ܖx\! 
gә*zGK2)RԐk"gC2LC*|x3XVQ:^sT7LSf5Ko1 ^R[+Jí:o"Whq,WD-(>oRkW߯*)\}˜b7֧+u"RCg }:MF J3J3؏֜Wh6JDM2:FmĹN&.d9grI jkL RmH.$cN#td c,!*!I5Uj2imM{Pjx@Ռ^3M"s& bJh}) L'.G6 ko#̡In5LG8jʙM]bLATە& 6zAy1j' (&r,{0J_N8Q4 8/ee9NXAe9FΎr6zi570*=~YEdv6V=!Mԙ'DG6j)LQm~_( ?h7 T0 +AmQH )IAjP NjvJx]&3!!`|$}@sH 惴xNY/)*coOtlߓ v6mKdV%Wl jh}ˌgؤމv3M]I@/z0RȾ§* ߲ralȤBdIq4 sm=&,h!&cUW+txz3++(,jMVWω.]* tHhq/g!Ճorz6.~Ώi]Ҵ:w4)M|? ."b1$_fik-KZ<"[-Yz,diQ"Y<ͯ~Pdh:{>=t{h5#|0+Ӊ,IΗs1x׷)Fyy3̷*6U5l;cԢ$dMjږP`uIJ*wo?}seK{o.jH4<"{~rmP&q0Utiwq/8bA1zĘbH1B FDGw8h|1F: lp6HFұ PtE%K6?ߘ8zk6.YDzN%|r=Y.b0M6i~sG2 yR1aW^ UFب")8,hʢ,5SPSdDD:"(ł QTA`n9הRBR=a䐤2+iㅑJmc}J!xMOz]G~=,)7쓺IF4 BGwl#Q(: bcc+%K ,$ (WKM)0jTe2R.p kAA8zg) =0pGwttYVPɢ8g`n gI4CgTb>פ j:Rjsfȵ1'Hk#LSˊ4V#nlWgwFa5jOlz~2~q vh MzrGef|$ N(gVfgAhdv}z.${z]e`04Sgɍ#!+nz"3V^XeCQC.ן~9>ˑjL5F7vc0ބ '?d("s XPM܃`SVɰ㍒ɰc`ew趑 {fpJB$9m@$:z-WIP. ,eXԘi`ZF] #z!Uxd!tS0z)#"b1h#2&"ݰ+tclN6}{8=\ƀv^ƽϛ]6v}!C1nr=-SyٕhʪY>^A,fsʭq gX"ҕP6;B7C9yL>Z`(3*h2 ۨQ1,%! j@P9g ~gy<'λsYk;v3YaWt"G 5uП%-Iڅ=OR2Ã~4 @Hh \_HxW @`-î!JjrpECAd\bB$xWZ!Jְ$-WISKeի]%e{5*]ŽOLRn=•*"歁$.i JRupdzpRW$.m$ħWIJ{+aEpSFZWI\њ+IZO@JuWoC'SMa3.0fڢ݊aq|#x+86Lo'"Dž魤=vR˃ŷiC.>zR7`e68HfD`t:CM Űf./>ʧ+nq_qmNePm<y,![nu '1beP{&j )q&i)>uR#8-+XUWH:=D/$yS~*h62"%|X |Ґ]f)p οSiX31fxN%C[3??zuug{4eyb?p*ԗ3b;iLW||,8f7qu7·0o.I>F@{H¡URVw/Z,<%E/ ,[a GH^E`Zj\c x͕FKz)V#A\K]"/g\4^u:R-^_no?2Ey{?h&ݓ{nxW8R-,0i]j1-P l1j ZceJJuG!b"]&ka7XߺJ+N<98l&[cPIi'NhCbRlm$<5eYbApEi0%dH[.JISIs6?=rYq4[wt+J zQ^^[ty^i4_1j&\7D&U2i>% i #3qE D& tR:`80?"v Rp!slqiq0i-#JFK)dc\{"k/%%&'%biA%04 SByB߹x64%s|-ɻS5@u/VbX]My|i+|;=7<EKga꘣ KsV`\Fˠ-wSh ڋꀧ'׾Z!{g9 VSBbjvAE㼶PB <ʈ:iy]7} 8.& zKU" KĨ.hpPJt3tݤԢfn?  j+RԴe׳! 
\\FKBJ:@\qe ?W[B+PջJz\ p *Y @B!J eI XrrQIVvWRWQnj󵔳iJL,z?DeҚU[U'UW6(F .yZzՔ–[RVzMEn??n4p~.."L?@!-jʧ)\̍b2]n\Ǖ_'-e0$3ƼC2H5"Lv˱Ϫftx=ym˪^Nӧ'zgsszNO1>?Vħ{|r[zC8?88-)}z%Lp'v=0UxR#a|^q#^Tv"/@$Q>1Z1-y2 17_gfc1>bY!c79?>O@kW@m]ۏo7Gzր=]2rlP$k ޴ &dr$gUn:[BҮd O>۞_">}@{~Eqs껴:0Pem?]åLO(uw5R ={&rLlM:.N-iF;scfMNthV՘TU*a>\Mt'VMݫRyatlb'q=uJ[W"|ܽz`vL*k5ScN 1h)q8ܜh}k5D *ckds-X6Fk۫N=hS|6n-6-=4TS;sX;qisd6CIqiJM#re{]c$34sqscs1 EG B%ލǞᎈfd>㟭N YW1tc[?n,h2ѡL2:玥(d}|BȥaUlS=^bØ7*>#Q"OpǬ+G鑤>]v.Ri:d|HE#iQ1'v!g5}0 ԜIzdU^əJjݏcJIIbU1HGQOp׬#5wnE;If)((E?aAۄE)эGrPR<@<%!Bb ![f8zm\"FUi,I;mgŖbSsHFu)[_c cfC)$X ufݠTGhOM6TuGݎ`FڨQ/3~ E US`,ڢ k,v**6(:ݠ-;< u<ԡmӊ#5 (%ʭ2Tb]vuL%>,d0)VN5![}| LS`{:+ Ņ%dCc:vL\s \/ +Aeأ(հ֞gC*D@ %ջV(V1*RO)x_b ~Ns6Ajcƚ #ø*Mh(pu 8T% d> A\J}S:n3Jt*T[!z@g3 d2ymB AvE=ಮ ++ȸAMACX'-P C='X !a@YPќ=4vҮSwD\*:·P[s6 : vfM0LrG !.A D7)ȗGC':s ];n `Ʈ pqYi TLgf(Q@HseԸIj fYR(h ;:=U qJ 7oBdbU.QqϪ J H[.X"r 94R( LJDFࠅfN U"1۽B6ф=j+Zy\54-dg9j_ўf1#.UUqu(Not U(6ʐvH'"/rCi2\|zlkӇaď׽ItjA#=^}mz`-dxs:p4(O-:v%S+:XF SbP4XhhVh39B{jN\dUB5Y[ PkQ4XXyy`44/! OӗMd :Ynm[ q;*p ]UBN5~T}LUFƃ d-ЗBةi&B%N~>:ȻqgTbrѡ2,[е 2"1wPڧyz zȋY4 Q/ѡB/10=} $$LES{ X1z*D:Zζ$v ԁ>ex jw , "Ϫ90V,Z{&FσrDƦ td!`Qm BgV*jlfWbd@BUq@ơ#"Ǧk[0aVB]%1#tӈL+4cvj0ԚϽ4n3lC$55Ԁά!ug6Rq魪]},e$ԿYIX@,/^i6X=USVy64Xu{nl'eX/n7Ĺnt&ϸZ0:{Lqt&f=)[ZCQ۶TɌ2UǨaZS1Zk Q^QzhЛAiEW#[*3bO&Vz7$%xKT]\SP.Oh7"2Uw[3hogMJT9 `-hH HČ* =>rwoު7al *'a+Aqj)5OnF߁Qg +UA Bah zr#F5X{RuN` LB)1i)xc@sZ@[-ܬhZ+k֪UVg(iI:3BBv:Q݃ v[i\x [Ŭy5b* gMZ30hlEkiP/zBLf\RčrzGz`lEi8 88kW$Э?LE<` \t*NYc6ՔWND$b9\PY 59R$t$b4uŲsL\- H_s]~B:WW,"<!GRT1xko>͛pp6@hZMhq_YМz^nܚLb^sųs7'/l>bARح6|9[itq1NtwÐ\f>rwXz~l;ӻ[m?,B4qpqVNͷ]O7-vUU1mQN ,.8HzN_F&N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'o , df.'@Kw%zWqB'3Z@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N*E9, ċq/ A@ N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'uo*rHr@b@Jx'P:#N  N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'q}y\qzoqtz<\

*GKƵ8,׫@k@y#f^]EI-`ՀRj}j^BW>Pꁀ~^zܨ³繋QRxQttuxգko?7+7v<33| 8q=[~[0im3gCVWy9WڜY`5ob61Տʆ4qw~ڇj.kϽ{/au?jsSV6p=Olwe[kikRdZ>\o2.̯</ҧo+ m]Jz{u#|Gm|ʑUtC2j{,mcjFw7z%u7ͬɪi]B0['-rJ)|,<-g&5ټn ^։_[1A^`x1w5q)wE@W(zjAtbj%h+t7*+kY%m)CW8ڇwNW%+WHWdy^]0?qŨ׾j$t IEDWVZ-Z/'^]9|t5rgJt(I^#]yfIj^\f5@I^#]֖ CWr .t(oㄮ^]E& KRWCWnX ]/~jaZ:]{.~ƍuՃ;3]= mxfA(zat@WZKu{+y?鍺#Faͷvmct]?>w-m t$i M.h|ʒ8=gZexď<SAH[|ܣ`iuM_OѲp@a^|YFwV*-4OR[mwfRp}!Uu1Hc`B,';s]ޣe@/V9 |1Sȅ^z[d3 u2 #E}!0E ZKM0lH'E 3%-+ m ]!\&BWV4'HW\i#ۤc!EWt(% HL F]!Zxu( ҕԂ6-`yk ]!Zxu(mGWOyBµQWVPtBs"]i2gPFD{ ֨+@Km:]!ne++vWe X< 8ᯇ=y?4HWmDWBʴUM+@iMFnBj T0MRoѫWM[T Bk)o6MPJvM>).TA[F,v4Un&jTݶV.bwI?{ t(U'c0)[DWsBM+@) ҕ p"QWV4'HW 7t+kZCWVtBLvtJMm]` "\BWV6f]=AҔXZDWxr]tp "Z+DiHW Jm 2Қ0Dl Qn)ҕZ6E`ӢA{2CkX QZWCWM1OW+Gd8 9 eMEzT2ݷJ+P'190m3UH堖5t-@LL*Z$>~GFZ5] #N ?I1Lj'j]l J3-gOcօP*ʂq0tgnN6ɾ?o߾v[8r?1uq\.XdrbFJ>Ƿ}EJ[]]j> M>Ynn]6-/ /~{3_=);27ج7Zpt|/'>X7":djzTIN6o\׍۵J]`{d|TLIYNGXtkPN"&7eL;N@LELgigO xo5$. ԥYE.4T 3xeR11i5eGmiDqNi eU?Ye6~p1 b}\q v7.?[^Y}t9гō tiof_CJ S-:x;fP}|ў_q HĻj Ml:S}jJ?h92y\{FXUhFH.ԁhk]SP~,-(MբUﶍ񏩋)?U$R=n:t6ؽTPE@bŸ[=Ynhh\}&Qn6O/a8#_(,]ROV~^l8Y_x+ Fq OI>ۋݙMHOd[4NruNRIȫs:ݗ ۟+5ЋcL+~ZQ|:o*st:Z\Wksr_ʪtkr6}3`>?L:Co{HWڧ=,DE;UT%OO&Qpt6wʞ{\~!4I6[u4n : տJkRUduF$y_<z^~C8h0i!"!|/Jnue mF"k,oΗQg×*Ó4Vb;\(<_0TNt*t0)K&;hrTG> I,BLS˚N=ͷ U~g&.3[2@7R:48 lYY( Yݝƛ)zPF*V h&voƕakƫ@4!?3쭃;x s:&[hD!H %&1A7(`z|1NP4\sp8-[ 8e-C2( t}ÑhZ8ҝj‘*c_Q :>rR+τXC49q,8:q5W$NC*6+Ku),g,"+#z<ф̼Y:687D`C7;Ǎn){sG8OrpyV59 aS ʲMhYY#T'R Fᬋ29nT{J3\RÌR(#VH"I֪:uJ.%%33xCmJ@qR!ehud1?"]~|=_`j(|jo{7ܘՉ]q^mKYJ5]b$8y %UṀ+7G1!16&\.+#t1qd J'*@3j=ZFbJH ҂S >c %>kIJS`yG Ʉ4;MEkf̈O4^,uj\gmVrG^df^/vx3)\=O> 𖂓!p F*9 ,L_$ɡŇŦam>\w0>m :\jqf_oq*vE\xGgYڣgbg3Yu~/| A@vx@hu4ɖFv6QUdbKRd"3+3?mbd2DSDduy65 ]%G.)IYzϬ7T>(= JK !Br4 J N9!zSa=T(8dehQ~?1YH_*!3wn b oA( $&*x*/HLr:﵈LMdQp ?\U:] M 7'9$,_ғ!CLWM&g32Teю[{|Q: 'K+u556;Fc%oܪ뵊:Z9IՕ W_oIIiLDOӏϫg'U i8_ś&`L<"r"B9E</N2З8TF-7yz1~٨ 1QF2:;w8~ntf?_gYsY>_OVV wGGrz[ zGg?>oBh6j0pgl qLN0C*\zyBzU  {/>RNO(1BcSwnzv&eRB@}y `mPV6Z3 NM%RTCX3s`;`#` `W,(E0VƌK.j*:%Q%d鍡 5~ 
.ðO(2xÒVu6m1M؊l@$(lDdEMP6F-$j]mhHiu3FYc7ۇ۩v}*P+ֶk{q'Ua@y+DU]Lh>#,˳/^ b/"1"ۿ.l]oR=,LR]QK䋆2E8RRx$4|֜Y~o/FIş; /4eBQ2mwqho?"ϰm ؾF6"5^O?k#TG]wM V jښ9r>Yw7.2Pyn, :X^פꔙ^ TqB~P.4Q$ؚE#jA]BCPSo ~v9H띃(*' 9ʮ?nOmqRFha.)GcМGUJ<*jQM_^2ZI!('P+9G-!14Q}*H4Cibъ27Z8|'|Kj&t 5)+PyDW"$۠b3yy@P\tP68%## 0H * ʹq愴|LX [s{W9NM`#!')a"d*oB~ ?HG- 1 D}`e 9׌3e ō)7YiK)AL21jY^ϊs 5|'fp5?/ ytP 'O#Qy+X)H%Y;A}~}0iiH D+ S^p9DVyG1.2TwN5;> x^ $C@@s^Vc.YQDp*؞8*e(E^*~ cZ;Nz,%q63Iɠ) *ю9  mX\E_9da[x((DHˆqAK6R'!B`>!@``rV>!r*st^. |;(v.Z.C(9rn0=Qlf&E+sOdze0]%kq> -jf #γ$/uG1GE<(]~J5/s!Jc3CSєgCR3(bKIҡr4IRzIv Q0Sk|Fx*PM)ԧZ%G&o}U.}ys-by[`}>rCL(w^dxL!":^{\(aVA $K GSB`S!bف#H%m11 L@h"!ݳH'yNSÏtL\U.|ѫ,؛jΐf9X\R,WmfwoҜsjFIaN{xw 4~-1]//kZlcoiN}ŴŻY`,DYv\6\:*/Py\T Q6ѐ/uQXpFFq:pr.'UvH(A"g,ck~ :Fbh^/(DN@]rgQbX}űr~6oS:7n.VE~{@(Z`bo@@3f cv)|ٲsk6޸ANZL]=sY_yMwR>džUϣ-o>kj%u] l,t} N\2xHy&2bV:.W ނ3>[|۹;mC`ow) D&F&&L녰BT)_SE-p)L1' Q= S+n9\i @H$TII&**sLHH Jd4Ok^X b4H 8$ Fn佷9lz>Oq'\yP:s AE,$e!% (P;e)\&.F \h\5}.qsc#Vr&)XDb{9⾭ N 9m5X+J6UHZjR;&4r=4kY~w+WGeoCxڱF8jRBusVIxM 3:9x .@Ve~Qy4_5^@xL~<'&d-jIwSTX$hH)PxG<63}ڼ.MΠD\O<;p Z%]mKhR:J 2xF{*7`!e܀G)b V%ޥ\rO) bRJQs4hq[ǃM ڒkkc-M ?#Jq! 2 ƤIϴ. KQM. vD!(BcHA,7(M4qL&tɔ.[Qe5OfVf ? Xib Xma8Ǎ#((\< >OpLC*siT3&:* $*B*2j4릈l:_E'⠻_n{fp=cNP#({5)U.;SNf;!z98U}Xd$VKQ1hprH@,N4iR☌rP3;8n$qw"HpXlYR$NvFDӚ%McuMjXS 䤫F6\:K@)!3sMl]`#=I¤s%7|pkͰeqJs] =:Z|z[7r?۲N9t#P~mFvo!{ )m )ôM n+ ˷nVy+W:pg_o^ݻsnwV 㜦iN>~kFK)e4].%L,>˵;X. 
|9Ǝ P_^ث>>v73pTic@,gJSKJ35HZBmrJ(#e7780P/ 0ԤX]IxkL:ny jcZykN^)#s'8#DisEKWTHiD0R]yYWU@BS8]).Zt%uκ"2AM`iJqEWJKqRYWl+ҕ.&"FWRZc30rt{V=@GW# =v׹auՏ0Gøt=t\m\o=ܞxu)/ N?n= ]=-͓N Crs9ܿo_~nquqs{h+gL*gZ]\X{ 6Ƅw?ƺ ͻUAͪ/y/~'@n "I|Z\/g~8YtoD ?s arM?n-$J jP_1-ywy)?.],K^Oix?xf@%R^J6˄:q;Am Q>:\TFU>?"bSQЫ z1*s;zu%])Jic2u5A]9O!4(l5R`jѕ2]WB0jB.t%c5*])-۱J(VuttEF_SgPVqkѕ҆?UJv&+(T+ѕ={-^WJ)*_ؕ[Wҕ>wu%njb>] p@FW뫙ȠוPYW/HWgӉ1U?``\k`~n`?Jqzf]= c]lK1Eع`rhLEVѴ2բiekǮiD;kz@weT_2 6š<nVPW0Re^fʼ,ܠ͡ăj9lMLkYO=L5wWd;DF6="^)ՍDzjZܘ}:Fڛ9`lB94]96`wH{tֺʯ_=L7pWggVi~JoO]w*Ggvyu_/m*}ϮݯWKz>ɋ)m] hLYScW_Ç/?rOozքlEf?"lb-u(Fsd:{WyGW] 7aRJ7jBBMM=ѕ"]WJ㬫 JcJpZt4ΠR9<`jRJp Jipu%M/`uq/TU3Giium` Jѕ.Հj >RJ?GWS;bMo1XR\&R8+d3Y88P{/`7x*~qzh đ-=tg]=Gף dia&@Vujɬӯ\Ɨ9|Ui}^ QO2^4W@ CEA*Z^j~I?ʱMoZIJjtkѕ2κ:䚆uDѻjtJhso4t ~_W (.Y[lYWa6jtJhJ)i~`;E]yB"]yJq^ُ֏3κE;2z4Nqc5CBvRy:$u=XӓXբ+8'BɈ&+z=U+LdPX͓AuC-G'/GWaϪ'ƳЛam0 ]tckz*̺zzm;Tf,,=u|蜓ծQ{B':NlO{UsUW-{?r5z)8G\Q0 POFzR O0֢+ϝ[p] e𳮦+Ď*Eol=R\&RZJ(汫)GU+FSؕ~5iJiqAdu5A]v"])pGWJh-3.κ"{PNGW<TZ?J)dpbft5-BPP":AK?ZƱJ(̝gۚw5Eva)҉ !v.Bؿ`.C?Zhzl?02M\`ZI+W'\A*ݼ|R*9nm9WIo+ PQ0!T)wJnP2?'tJC=}Ujt%unu`ihMFW])mJ)yuE`FWkѕ5{V{tt00s5\_u[idf]MGW"BMRXPZtnRJYWUƣHW])JiǮ2YW[U^8ԣ+:AK/)#ՕRyz>16>=t 8QF?ܡ *G#=tų\V>1ma D*Ysmڊ*ǘ*Lp~]K>ZIӧs(N뀾BCaPO0 +-؃a f' [vkz;JqmEWJ וRҬ)9F0EWJ;T^ #dD@5R\FWJ;4~f]MPWJz+ŵTuf]MPWF0r5R\_P҆;Q8j Êt%\49ŵ])+8jkҕg]qYJust5I]ckҕzƮ4Jir+uud` Tnf9.[aV,nnٙXLt;=KRtӼ]x_^7 tH鬼F+)u{ҖQzN(~Qu\ՋNIggMIy@4~:\]ŕ4́ {$sU68[}VLͻw|㓣b=>k%_X7u]+z#mi4ŻZj޼}-U7je͋5#QQ+ׅOûNˎp GV#o|¹>5/NHeށ>?C{/vDOﺘ~Mɕl/G<1K_dɟf.bb>66όޙ\ ;P^1']{7k$+CaOUyZn'W}n?K)Ty[~m1-c]dNl}&H5-q$oIbq@N!9Z㼱d)[۴vNl`Z~]':)$~ZӀMb(iAei AK@9ߞM22'bw"GFǤ RhbRK.Tb+h9tRĔ}洛Sþ xùTR5m·B"@.K$E% SLŘYIiZN{jm1֓\R5KQY΍ˍ6KD 6D ^w#-1#o.vG Ҡgk ݋6 .h%:iMals'U%1rSG17E*Z*r0x)H[K#FK+9F4Bl(rsy& 9)ix"kʳXs"sFn|qoV|B!yoDmSJ@AM;az5hڐ"4rњbBRscۣ5۪n0WK+5+0Ȍ  ʄWF42,Q5 7!uTS`N+(_PcYTB29Vqn vEm%||eY r3J 70c)(|kB CyLDB!dPeEHcoMOEfJ2֜G=>8`&\"0S|bDJp``gSO@ar``=2t/ Q*ACmj((Sѝ Fp7GRF;U L;JPH_($6(ة(Hv\F*.5>*))}JuN32r^5 5$Dj E0PZl-@E@I v/UTy V"50tȠ +fPir@E UD1MDr|T1&bb;IB1!MvnlיЕj;m셿^ւ;VkW 
Pw!>Hp>DD-xxsu`KFtdYt4كҕjF@+XȆTyt_V 5ȤuՅ: 7\,:T& :hKP-VwV4<̶eTXa*uwϗ9u- ȓd}E>r2XkxhDPHc ֣.O9mڃ^AJ._>cLBP}ֽQ׀R"Pc*2k PPIȚ%`mLAd&#E4Q]  Vjw7qVUp*y XT*e/@aIY6H!(ƆrĪM`k>؀ LkJ!=jϢ;q$LD4cA(\s 9i9RQ'y/a èIP(EeQAR܌EEH,ag=@U +`~xDmDVc-S{NfjҢ:j+Mg0jΤyfj)څ'k{i<9^h5ы7[E05J`6B z0XBQVT@9ltEia/B2)z/$0#"ȃ5>dNG=zMW: Fe(6tqHp@7ah .* 6j\4bU0]"AC1 0&iۂPd.XIWK4b/(m*?t+<"B.8!5v-j/rY 8 s:a(;U"FVu;Q>{x4'b+jC :Wul/Owz6|PɲspvvI9~ɦI'_Ymln 0 gM^xT*"< ͧeˋ#)V||/lҰts_B۶Q֓i>C.֓a 㻖@wROm'Bwҟ)~UR.6؏g-G-iB?YZGP-Mf%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V+6:I эG X#h@@J |&ЃTEJ V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%UsEp( {ʠY P3!nNץjb%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V}L%-_Aq@lTQ tk~%ЭQt&7T]JRi`%+~%Zx{o%Ї*~St +X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%P^!z~L/h)zX^oԿ]ZͺIY^,wh7&^W4{/\JcYKo=؂:qm/%%30Gbkd͞M>&C&ٴYr{fГ?3yMΗͭ`j mc;);& g:XE @xn"kz6*sҢ/)/Ţ@6uI,pPhzŮ.3' YY^^+O[[a\K,Ѧ[?;ܮgs)ϐ7Ѵ&E:?|d/_$iV?/ Kdsm/ed;= 4Voe`Ǫ^WJ ?Ң; ~Ыq5M3n4\ prcj w W39mF؏hMX~&2o+K1pExV*s41I. Bф+kX ⾇+C W 4wExVq,:%+e~7Jpp ۏFvpGjPp•E{}0{SALЪ:'2i쉶F9#ѺC֛{>xN|U*ӻ4&j9o _^6鋱{RrOnѰ/n??5#I i<{isgRt]L耧עwY-+Z2ᑻQ}3tBפ;SDͱmdT-!09O,?h0 v,dh J ?@2\0rL;1p5 c W6eT`^{LV"ƍ%\ڏLv(= m/GƱ+B}W28W0\YT|d؀bg:yWdp<ŕGOi7?n׭^>?:ԙmƱ(ƕ*6m'x.2{' c>+Mͅ'_?}?o=>LL8R+9G+S*^hS!YVΈoLjKީ ގ6qgf$z=E/fϦ_u;ݝ-&勶C}!js6F8B0B+}gR1x#kF؝U=a{.átËի1h~:4~:]=r1Cnx88ds'Ly{9ztgۣYȰw?[]"Zm~/||V2<ЇWpm-z^̷9 z۽by:?c>u/mg&/g맓[s8㝆{БӾ/=ݘ{ЮԻkg9O hlyn),I:7oTrՉPu-hC޵gw@߮~M-p^]$ÑgӶWڮoH)Piui[LָCDPhWF'Qb#աߌOS~&ѡS\ov?!%n6nd:|W4msP>wno67gtkTû E1Rvϻ;᝿JlsyTy0]|zm1t_? t64o]Jۯ_joܶ]mv6ѢuHwL~ߋC-ɺ9r¹GbZvŗ1Vr9Q-4|WAQPnn}}ٻ6nd**pV%>X4$ԐPrTyӘMET,jz04{{pG-6O%ü:v. 1KBH̦(ӌ<ڲ"Y*[u=]6>87ϿՁ)5↯EnZ˲7\ʋ/\w <#d: qa: \WS]<Į-Ƽu?cN`0{EF+.J.JDHT<+"Y@V-8X*1g볽u΀gbO&0QAqҦ@\ܚ8-xtH %CKh rYtJq@X)MIaTLA DH2R%ww*-#VyRYKgI@x4͸^kbBˆښ8 Ns(Twn)L~XiPzۦ%'UɦU՝^ɭItڰV1^M&(z5l*Xގٸ\6/|3E3j0#7V+0%H'iʻ. 
r:xkplm#}ȨN1b$oS#uL3dѤ$(!R,UZ4X[BuG¥Df3>,R}Nt{U&;.uhX>rN0CwnbX > Ё#V$6(.C \%.'ݐɞpg6A DFc(S"A5qv#qb jv+ J$ONtbpLGQHi;Iы%tIV0c7 dR,F@bLP8"FcЮ5ٍ_t:q_5x("4^dAO< 12V*6HN5r\Sh %$}iIMJS`)㌏>J@QL ͝"[Fٍ_G\,sڧִ䁸H˸hxmϤp@SNj$>2NpN:MG/9HbR8cakڱ'x8x\WyU#w>eYQQяO(r|wяˢ!YK~+U,:I9Xy''\ B9 HAjϳeщ`a?t<=~<Ks}bXDHƩNjP#K9d >F`5(S 1z?/P>Nfr'oU>"a@_ }c2BDY9Cn()Rp\HdHI2!zhr#fZBs5>3$;РVRi jjt]mtdz> .n5T!4BUᢝ37NŠD# '`ID줨l/; Ox隆b\cq~0zU%u(d:x[~ϳ0%"NZixr8&(%.uO }!x䯓%wTQ}b?^n$ b8Z|O9syv ..EBolW]A:JQdLȬM1//Jlnx|~jí7oe?u҈je=7XZ{y5O&9XE9T5otjYŘwRmtcU_jjIlZ\"zh˽< i4e"$XE_\x h *r'Ԙ_Ko)[`q͡ʻ;N Q[Ʈyb~q[*Rv<)v򳻄9~ 5uY> h@͕\IO:翷E1`-S3x" z uڱXQz{҅lz= 4/"͓y 4/[w$N,+2n;1 e&N|䮶4 *;NƬMd!ژ8tA/k`Hjw_r[ӍOo~c3NArC>SKM҆|bw2rt@VJ]^8XF u+@,%J$Pɱff{w٣e^,ÓTrZDcS0*pK΃D !=#14H!C{2 R0/ `S6iVdjb Df@zeI&.!#@=ضdk ν? VM>ulyˁ>wB(:w]X<] ":dIw7SW]cA7)%SőA(߁ݗŵtaApEeӑ3:1)"@,q$r*b10RUD!=M@n*rqs.AE`j<2<%-p[r{.qArm3K2DBV/zNkn v̞Ѧgt`FUBt=Ӽt..9?wn/;fQ։0aFӳo]Hureqz|e玠x!ܒ 5$Λa(̺ X+|̽Gq1Zttjsx>m{-ݘӖxȦVUsK2OFMTӐF`șhvJ=U.0,\"~9,C^ZDUyy%yJ\DDW:G鹫e{%Odˣ*7#v' 'gh$ݫW_~UO~W8 8%2C$, ¾?J7e[Mc{wh6zۥD ~}]%p2 B ; fZxI׎@KIyXnLb)uo^>nJ9SL'YSD3'*RFPh>6믹 aO6,[Y4=OQZ'O1eI`9eFr>j9;PX Ip#-p05Vt*A5xP ڎh3AfY߽YPrc(ύΡ<ܪCi2lge wٻ6dWE5/'0նN$Q!xEPBG8-ϵf꫚꯺P'a:KeZ"oz9UW]UN9ݫ0uWqJ׽&pupO)CZ"TR=Ia'j^^&IrID Yoz߶IW$6hCq?"Yo.]u/yZ^pڷ =N&M:b'=^ NVzv1 `& G^MF- b޾l|hl)liɣ.w}Ӫ t1HY_*&&M@5kz8Փ;R 3R` 6A[#̘wʔQ*5%OVEz)ǔeOj]>L,kmgMV(A0{d n,1k/&,tsBG,y{HMqON0q\t 4ۀx #DĖ17Oy=*j2ouuv?׀V49٨bC g4#\9wvڭo@BScq (Eh gOuA> H=RO ԓ-xf;4B)J6#z!bR 8Ø Paxđ 7 t`9]K*K|Cܲ2s鈕@X<`XA(Jhf+N8j1.K+xxu!OGMQ-ۥt719R|Cɹ͢cW3r.*D0Cj64*IxwZ4Quec3V rڍy j>rzyr}6L~nmw>7 6p K_3 +bI pЦ\bX 2D"DpE)NQq+\Mf.s!em2&t3q]-X_mj+~\c c4^?[$"^'/z>{;AK3tۏ4iȲ,$lfD\71h&9o"9U9bslgBq:Ǐk:[Onm95BNPۥ] d };+H۔tޕ~GD|֋qK (ԾrvM\$(~鴀E}mnݶ-_ KR}mnXu ^~k3\pKʮf4rэ#7^ѥtk_;B+|JЮ)j.C w?+-I DYw芽[NtSЭv?]+y|M\߻b?`Γ}{ؚ+ٝ߸c:\畧grjd=ymDsc{.CIlRkgxf_eE_ iA7)3hTBIF(ӂJ ځ-(hY. QD s1jc%JYD`i C=ܩ^r16$WΟݺ/,-g1w1Je2ȔBK'2%C`R\1[g>>3=m{p>@;0G a4>->P,qw Nzmds2Ƒ5B ȁGiz.QogMɣ*:>%#MKUG!DkS_\{4*+8pw@$F1aVI }A( fVzY9! 
&G"x6'?so@7kPt)貌P 6SJ2,F+L:6!$;?S6d &eȎ&%cmʑK%3>R%3+wтq H&hLơX/Q'@9QHh^)|Ա8]Clm_N3n$7^iG϶rS|\9T$EDLL$KC1,0fq .91-ХmJ-L1xR3`!:gb <&Q64R4QR$LOSʉ佃8Գ5ft5g2UGɀH,##Ҙl2̧L"[I[F0G2&me r44D%E2OBy GU j'qҝΨC\0|!= ?R-y!HɠҪph8ڜx!7CU 7mGn'hzȳz4wrvVi•:fcNJFm!ٮgYPUteK]E8Li&Cшi}r] Xk4O0գz,ؒ/kQmvo1 ZX Ϧ6a!M{Ȓԣ@ cɑM\?atu?:?@F`5 ~OZ+mt?)JO"h%3Ld.@VճSgY}mƫEqmh҆~DxCkh: ]zdϤ^Ҿ%us.dԞ'=ή?^7;lw9W׳ywz1|IW0x= :bR{%ްl[{QK˳UW1u_̧k2*o(`:zXZDP"Zt#zjz#=(CR a*(O*!t\,SqʜF'8^n>⩛[?ݟm bͭCAz׶GwG6Iv0񹻡.*z FLڠfUSz7J1DtFi~h%#R1f>y5U,}9[t[㈎Sν#fӳ35 ݰ;G 1\Lj#$]01ّ999܀9α )ݒv^'LX`,+r0^E^XH3 Qk/S>0+|H&X/sH)Er:` noWMPή)|]1CZ^pCV,4#R8 J91%4)gDJ!Y^R; TFM(z,mQho9Md=ɳjܠ GS{ جLxj~b\ Q.GQ޶&|e҆7j3ifYE.$b r.&11'!d/0!zU%zNfc1r4 %ji9MX`E L$[yFle7ܧqMhk>4ä狼ҀV!-o<(t45p4䑩75 ^5nZrޣ2r. @ R)PL& $666tDÉ\dXHIDUQXe:֑J3;XݒφP)&JqL9Y烗1c˜gK%@M1;SYAXST􊀛c\ |N3iMIIC";pI.ɸ&+Ge99c؇EPDP*n$y3A^y.ae1rˍNDƦhP83¿*.h(oj#6U C|! Kό#%x(H`e,*z\(HZ-)JE ew3UXgcZqs1WEZnFŘ+olp~|^b4" &Ԅi'LN_d6M6 >pޞcNx闧{Vߞ=M-UcZ_B fXOkO' ¬Qrdr_j|_qR "7<*Qs*PU&; 7"(*pD˔HO P:E"H E6ElhyXBh} hòkќsNi#~Q/ 4QPVRsmE%OpN8kl۾IG*<:"itL*" 3âҌ;%t 8 (O  TYxШ}za݆;X$dT=P3e pe00bJԸ`eif7ICv|JUl<9ϙ.,:fllٛqEpmb8;IR dn/ٓķa fj x*X! =)")RLTU8A5<gJ`pUtnP& -,I֒FtI?QG1J cq\g^ ?ޅljjҊ<\Ze8<Oh1!$"I5p[].ڿ0\“,?}0Q͡<΍};k^Oak}DysbvAbKbͧq'GR}6+#YQLI(2Ps'tjm7؇ f0bbrx~஘Fu1MneU*2:>?!+cpV6{6*֩VnLuf|zT'IȓPȧ3,;aą4k$9*g(6QSe$b8u3q~Gyӗ#jFxa\Zzܥq>ѮRF-N"4r`OhE0`^8kH,NcXpt:өd+):]K9vb: zYtZէ4"aJ$=r9ڟDg4Qw>'Q)oTs)ͺ2EuPɐ-Ē]TCFHt.c9C){3G0m-/p |Sx[]ږd=x'>sw \  Hp)q4"@D Rfe*PD`!b۴ڔt ƍ"VFudXDcZ)"V:D#ج팜Z@B Rouܙi6:֏^6dT^OW`[泰;6^X͖owy} 5-%{5oST #*FjK ^Ȍ ^`B9ډ;#zdNW ;b EK'I3>XOR0}A'o^dR~Wؑ!B nG"M@c B3L{Y{T;A(5:4$cxTH'K`!#a@NSDN:"zĶeWL]AθcWP[kWz@(È\)e"20i A Io1Aa88f`!fX`:J$(!E'"cSdQUA1vFxX)m6!x7wED1"{D2o,@ 6 5A^9TGu\c 6 #>@8XG!1pzh%A`Ir,i$53G:ƣ3r1`-bµKgq-qqQUO83[gA`-9 .$(.ҤWTt=.. 
v;m{7{ej=r; ǮE\BvX~(x!zWہ#( \s^0dxN%C[M$A=9=;j6'[*Uq=8y: 1^S,F)I%bPo(QaW4ZFیKFcD2Y+DP5eDDL ` `#lDc3rtYtZoJ]tK|Lvy߾P?NN4}7N%Rf{_꾜#jB8)8 bXJ,PBEMJLz+JO<&G{A-bXd )s 46KÜŀf&HkQ[gJ,Y"%!"=A:hpNJ&"wDÀȮ;#z`(rfSӶbTꯌK.Zx- eW{3,>}8,H¹R*vO_W*WìzW;l&lxXI[;E5_TjT5ƅl0[}P~f8J)@DGW gY,gX` L8!yu^d=9,qr~˓aӒ} ę4`]M弜N^g-@PiBAWf>eY:s:=se)nR_$Sy)HʽL}46 `wOp5j1UC7VúL9d?6[*kܻP=UQu?Lt)yrwml􄚙YYNǸC(`6/q?i\}*r 'S2xt$#׷5mF`la.YLR,c0A[׳5-jD5щ+:Zmu7ncV];=ignsIqpӼ_]k3kmH _/ca,06w\.mZRY~ÇDh%gÚꯪ5I(.oD ڧPԾdtRt܂+qM}6m5H ϰ9{!,!K| ^S6'[GT# `i:ձyz۪oT 'OTOxǪ^tF7l3OZ4ߝ9u`W˒EDL!7X?L o,X& LDc] J.yC4PK 1%VkM-3BhcpH]do09U'X&MuMaH"@|I cLǦUH ;u*Iv!=eMUul_[F|`C }γ-w<~>S洁  E`gmN' #M]# #pQ AOɂ0QjHSAvL#JXXԶB,I,Cm˕H- U"[dlv]2EeKk2VSrgPuyz7g}ѹKlWԣd[l@v@v ČxjAxu i4t,t`$IBka1VVu*MnſIJot_fGEGEѶS->g'± Ў.Ӹ0lq]Yt5us.ϧGqz|5s)-J$,]`VaC╵CR(} %H},]VROϙBE9QRSXNE-M Egfr]7m`+r/^,`Aqje荋_W$2&2Z4ɳO*c",TC%Kv"Dٜ8ݴ'Z}{Dq6=:]Ot`Z?ca\Gg,bx܉~f^~|ohO?O\79}x[t/q~I{bA]'^9Y- dSY r_wK^l/~Xx0|'u(0Mtk~|AkawVk0mÒl\&oO3ojqm\vˁZ{Vǫln|_L~z,='ޤ٫D5 JT6.O'%:|Yp.6LQ&7"NQ0Tx:*v mopt(Kю; ƧB|֠S .g}. 
#OZ%IPжWag'dq\ )BD"` ZK05?sYm>Eח’o TA١9KY7|͛Xz[k2|կoTՍz!VD,T+LHs(7N{6͓MD:t.{igNi(d0N;t@uP#†DC8ʘyPM~W XR&ˢNht.$T!%KĤgDPvcJ[+618` -F8ÈB cgtiLm+4$v>_th|K%S@؃͋}X{06(Gv *ß.:CIwUOȉz/d A@P.pʇ`vpAl$JA*ZCRB'ЉdO~l}HGiH9*_k>6IZ 2^D5ز TN2d$+rQy(=ݽģq-97/yߵ1۫YCkdS<Urf.Y&"Y:W|Lֶ u@]:aKZ;zei h2o^Q*iioBRVFYM[v*O-*A{H @d H*Fj9"u2HQp&7ƇĹJicLL>,lB"d)L &ɺ8rFV*;=c^tj/ۯ[).Z?}O'ũk^%Żi􇏘9ꐇ3Ea`;HVh)M{sv Aoy/)6?O~c03}x- kVCsj6c0+a|s%j6ݡo=G +Nx., l5bS`@5?|g{H2]qB&lVsοddèq= R]]}{fv[|U /ak<>J\N?_osjxjK~^eZޜw-ml7\͛-gtZ=[7+]f +Nf9N]uvvA"_׫ջoϯlvMW]Ͱm7\-fz$Z\Maz~~˳S殟y9yLk>7^^ߊQzU s1x!|S>@t1 !)CLQ!P,ڦs!E)ZCT[*bbVE&ɫIӖEК'&w(JRq^2u>Vg*ZfoӼ)^B:XTN{e@ t^ϙȑ nM_MPM?ݵPr3c`isuscc~bnTkA]N/NJ+tEޚCPRT&ٱn>?#C}ÍV6t|~ru/8j|.Ykhfo->m^PVD4$+U!D:A,rz3I`+zRM7-yۢG3_ޠM*-1{:+ L?/s}8;t^T7ݚg58ܢPa~+jК5$:i[q V("75jKRIn4+_Q_s]qWq ϧ}.,M?f^~|otz>|/r.sVo٪WG|oa]BIԵ@dNh~ /QȦ׳~Xg*_/^ԋ5gUWP6iu:ɫrs+//mXدP7ۓwg/X?͖͟eg瓟2]^MS}9_@%?λ|98gXZ .Od?=ފp ?On`]5pm!lIS:"v`!GhQ).pNolH߂[oc s޹ Ƥ(IC&t)Қs0Q?{׶䶑dOzЪ+##lLg<1v쎎J(&٭ß5?_Y fy Ö( $S' yhN!6ZA %,gXu=q&W]_uF>,ll13olK`޶3|L)2"8ї_B;xbGU[לQm$ +? qDF! HYp*@Jr0h;#?QjD{[l5Nx޿LNhX##D!Au\a])R sH Z"G 9q 3zR_W:dGe.mv%Y fG,>HLKvwֿA/i^.G}}=~R]-+LՎh[T~H.BE//{TNE/dZlR3&3)ᄎ)F(<ʍ ngLqxL@ &g2Qf$E!RHnKam"5̹aYY_Ogy^Ùt6h~_[-mCWW|r&8vxWDTɄ)&8U$xg)" ap87{B"ԃ"FgFFC4R[V[C]Ҟ-U1%X6ybە9=x#z~Ҷrێֿ.Y1Rg`{\Q"lj, ͝M( FVu4\%>[DsﳕZ'{7|pB>:\`0 W3DžY+T_p+(puGkp}<8PۤAbM6Lry W񊈁lwW/_M]MϱQSVQQ2`rH{ eBh/M)rȟ/BI3ܫ79~v_X& %ne&eԽ. ] `n~1v1]0g-8Q roϟtj)ł.P)<{<囦M2Fɏ ѐlBZ BB2S 1HI"iP$ A4( A4(R$ A4(EҠHI"iPrM+]A)/EҠI"iP$ J%1//=K#/=KHGxxGx^zGx^Qˎ2J&2Ynl%hJjJ8τhdKtI'c%˞'C:u%Kn3' [ڈD$HpG, aNɻ$`9%(PIolVPqC}$&h. 
ZQ]Wz~t>ǹNFC,M6'dLD􀟝w+ޓ #xLPR(H #$EcɔP:)O <J\&< PRWSicy\d́ڟ"ēP^A6\V#V "}$V(< -*:4Sp~E\ D3ɰl*@~U mc8 A4k1!L)y"a`$*b %b1ijX|0 9T)*L!cHCJ*@,pDc 0<00=-#;PYF t14^(뭧H6*8ӪdJx'H!˫ tN`R IQРZdonn츨Ŗ_ԫ|bD 16pK(x X lFr 2 t*"b3 ֒zV&&qdջѓ,4vP6 [0z8#_Ǿ5yuyqbIj,[NIſ9Om| q:hS^fH0vdw;;9~X~ |Onod2{p?-Z0y}|!fg*v|?Zg|M ͺQMsehglNx6"_'ND4q̧ճx2]~q󔛃,NgOF?煟sD汞a-V] W :'0849 Ը]^%2-/焀Gϱߜr;ɼE?̇χz=E.ӦQS%jK)z/>0c"jfHl>Ӌɻe8n߾4#??N%XsZL9Yo#H.As좲!5/˟L:9vR&)ӵDd腗7sKʿӸ'd6{>,%%'r&^l$;Ô  _NU$%Dy՞D M|ȳm$ۭP3;d23ZcJ=Z@Ƞrü.A2u<J/x?@7fd́MhD{yu]j};ؤXlUǰ{{ '> 7syn/^V'|d?8*oOcŘlDe};[͜R<=D.dclDI]#~.'$ڹg`v`-IAz(/ٺYn,<^u O{:iV%HFEOJZAT$?Juܳ<}z]7˸}ll{nya rRO8@8_yzvY8֊#fKɹG.e\4A΂2xh"H,5ևf2w2G22JsUk5 qRaVIEFݝ0ޮduՀӌLg{8Wa3R?_]wq.~Z%E,'~3~TAѲY6`.Ʀ4" c& :ۖuM\C) [e]tn[:tW lƁS?H\zjIȊԵdz.֗,RJnSNc'j˃H_=Tkwn{jc'Jͳ#eo7I )ȤUHT^"DstLR,=}U`p@"o RS<jx$R)p[#gKr.MNz咥(0UPd`0M1bZDs#!k ½ l(ܸs(BQ[)ļEU_T9LZ1J3A3>;,vYB}em:hc(^+)Y!Z^grzH8']Fک²2 EY{pB σc\YEdi.qP<"p`Z*Jm"r^GϣffAd ;ѨSG~LzD VfT ujNNi )n悪NօAA5a]Ww&B *~s?5u^FdQp-*dL]FKe20-$!뗋J%H'o}>[0)juOnm7˛]hO84t7-]ƛa)lySAuXt{y?x0R((1?`b_٨^S{q.f?zHw73L,A=Gm~Bi6Js4^I0hPϩ2lP5_'C'ػ:G&/7߽|ٿ~ٷ߿;ã9{[<2FIj{4 ??Ag]?ڨf]K [tmeNrmyMۯVKr2^FXqJx)'@ f:1%6xğ6$Dpk-.I*q%kmqCT6!7$#a&\{'L y682ڱ((ϓFS!(:PE*V9 ݝN+w:{ ڞ)@y 17;{3P^sgʳVun;䶣3siʔ3̳wc#zsahXUS]W͟_Mrׯ~OѻSÏ/ÛGl%<9?PZBT8rA~Gz08*J-l~RJF6juR8IbI''?5RgjRh"|O͏?Tqob?*((o]5j Xݠ66'j8#;۫}1khL%Bqj-T}6<8cΣp%}{xDˋ(/PvI2H_,FH*KMLP<%{Z>M0Q[cx( mمlc aDǹ  ђm-r>IĴbW7F2-?b/4G :f+ 2bΉdp K."KLgRZHXl&s[ܽ[l_'p|f3.3dLpl6MP;kCs{V4BQFB#N e1#!e Z<]'GYklgw`gܰQŞ\AqAHV!z W9 OFp 9咔Ѷc6Yײz 9_d7ʊax0G\AHd'NeTe> u˩^6ԊNO;|z_Q 'xS 994rtZǭ2 V^'`ouL27\E@LyǹŽSE) 'Sf2͢`cӒ|RpIP%>rr%49V,v[N(}̔4a>s?dQ@ᑬy6hoH_ 5IMgyU2z3*"}:e;!AyySp4)4r[ݎ"FYM9 `oDk\f=qÛ˩IQLn=ߢY3K+ݒ^-^bS:ږȍȺ_S}I4"{;_(5݌ޮ'oon?;G3lͦA|7nݾMYˋo|hܼfO7-Ƽ͞G'BOzOÓinzw|kΆ?{u[/5qo%PQ\vC[:kpS@e;Lgi{&KՌ-mYOo=\`r\ 5*F^K,LnfMnzR=JQf&s %HbYdeeqVEйKhgWƒ*yƃJ!2=x`9᳏6FWSOW;ż4;K7Dpׁ-p?#^u0 7frO b:Lxr˧4L `k8PgB cA:xCjҾ> lPI%urO[){5wSo|^]0eլx= >W7qgczO~^vm@mP&AڈT\$ey˾dW8-P2foe61 Tj?bWEcwpKxxR7wE1ynt%SH !HО}lNh-Sq"`NZtG:MmՒי]ľ v*0q/}\W\=gWRWqy-szK.Rvt+M 
EIfqqژ7;f:QNҿ2U˛o` ʚn ͛zd4IP&s FƆT0bM3E퀫eU+A KH!)pHL09+IIu0[#3\mbM>C1̔o|BLk*m]e3uJ::CBBb5tl ]eJ42) ]] f2\uњƫ+D( Jh pi ]e\52J9ҕ46=*S-tW5_]Ur::CR Ul33p%mhye ;:CFh]!7Wj W2ZNWƮΑ4ԞkpMkf9dݺ/M/fҦy/h1\SzMȕ&@ Sm݄)!ЀwXS0pCCgnPдh覧kռ.V#F,n\Rn7M_5tPs]s`pyKUߎ7˶\7[F@ObBLFp"AbCĩHSQ>i\TN1Q$YɚJɊh5SMd=Cʉ2`[CWmVӦUFi::G<ZDW0U{iJh:]e ::CHI[DWUBW-m|0QVB֎Έ Wsnv&?Oob4%,_ח{;x0~X#+?O-ݏ\Pſ( !-?l2 Ǯqs#6,0l= F~۷'.>>.hyA~%>Fs~@͌w,IQ)`3dAQRjpQp"rny­=r`IG^N*dj|U+cR+ī_{=tYƃ ڥє5!IET;ǼXߢc>䵘kǛ i,B+R` X&5%.@N%eL*2ψ>1'S'L'>˼~~{VL) >:Y Mmqi:7k(;disZ~n8~|}a0FQsm{~vnm-΂S8Mgw)ͣpu(손#?4zNK˥>Zj)_`*짋-zP¯?>^}bY6B bC;G/]AVDw3^-R-PV r56"ft3HQ+_(r'H$rpEH}s߾~u ^t^Ϲ$=^=zۡ&ezS yoszw޾ߝޢ4Ef_M:2x摊-CoN߀Wu7:_q~B&e \&b]yT]Il`>Yϕ _#b}:mnZwSa?#ՠ 1)A^q- $=eTD c>敖>L@R>T1HL428o3 RJ-(O֠UɚampòY?Keb;% #X- mBPk"J[!9cAFd+,>z}C奴Xɳ].ҵ.&Nw1rM~w>]|wWނhN'ON˙Ի7egR'7q`z^qmlY9˚ލ?Ol?*-2| g:&%~4)|؂ %(fh?],eb.ՠ Y>(r(R}Bt<<} 7v6Fְ뢲ikYYATY(JYTG CɊ,P٠꼽_Y2: cg7qȡ,lUmOks#.W s?/A}ÈoG&Q[P VI/ON1h-٣{dMxyyn2Y)kϬ+οJt;花as?ita} EL]`9u1O;{LWiv䃾]eZvW>yȹr=LF-Y`l e䞋_xqē񛷙~R,,ytwG}nn[Cxt#m'i*G5V(^>rzex[ 5 11$#-`Cn퍆TÇd܎7r;KCn^Lx2ӶuM/*zo[_/:6CAJC>a3ALjM<(ÁeF2dh5T"QPl'UAgUOb47xc򨵈 [20CmtKz ah8HKU60)yH9&s:*fpNa!h`i- 8$XKq& hȑ"$f"SpR֐Ǩ{okx~V=߾7[c}\qxF\naZ"dB`o$RXGd2* ԝ _/z>sJKT΀Ft^',0ȸvR6g.N⤕6{,{ C^gbfhFhm֪8$FMf.[I^ݴ&R8)6#MXh;)iTb4(K朄AlwX\MqLX5;rzi==a/lqw&TnA>`䌢O|Jjwf!z;9c#oۏ$EduyrABDM ڴE my*m %݋TEQMM,!}>`I[}Z h4m-wټL?LvuJs_e1PøیOv@]WjOK|$ P+Q*tL:OoYM( oN;{EnB!@dCJI&bN<8NzKz <+gsv۝A;܁(a܁ab׎܀ZJa6dL29*I܀Cۓ;ޠGEza(һ+/;'mtRUXlbRD6ɇPKe&&εq쏵&f+BED86x &*.l\t*ZNnC"LDdυ :!$5 am"e&`ΘI.ǂcl:wU5Oe!LPHY$&^$/؞6,5 &;$ R3fe9 >gcxHA3SZfj^jFNh\(N˙ȒB;пN_.hoZ4#mL@VR8> /w{fp=c.Q%섖٫.|s:Zp[l xKQn`xVIW[O>Sʂ 骷MW!vCkxĪ\^l8L 5`:gZ;Ѿ=z?4 K(^ɺ&3Rz1t/{g Б/{f5d" )_1 +@u9/G홝u z_kާT8y~1'ezUxݡٯ)7lvt]tl'+bxYr&b* H'rM2Sx4$uۛ+yiJ iHޤ3X0),sJduٖ$jtkV x>.ˮwT~} mQæh |$R ɴ0O@¾vwyկ&pqϿtG/I Rr Q'_ȆuK5Ԓ% e" MA%5۠袍D9ؘPAL3ȚP`D2OJǜB^TE]d :x6kΦsK׍#Sz=x1=EbXXwudS&(lIduP[ :t[Jxe[z'256~T 8Fi9뢏]M;Om?i^'6b?-<;{^e%}}lghv;񪺳L.[`gSM޹yH.6}-L͞-*mc %rڐl|bƐhp)z䵙A!J"EWrQEmbsR1P„ʁ2QPJ¶@_$}OE$#vgُq:ɗL̾vgvj &)O e9, tHfqURX$N01 )23C 
lEk2d)L#co1* ]t6x5{~c_3/"bLj"xcBsb 3,)`|D*{#jIڐ4xtƵt-}N@31DŽJJ SbKZIH3hLjtnQGEV>:;{pq\$Yᣦ%1EQ3I@c4yEh2Kp.]#zk݆Q ڏ TiIs)l}F?-~_#5y Q6HLICあmϓ~m~X6&a˯*)݃y& (P0jcQY+)##td 3 /I·Hrn>eH3`Ӹü_`y-7ݜ<{wڦOo[WvyzS#3\~i*:U׊mC 6ؒ2>8Qe",xTy W;ZclKː )$i\uL)$"̵eKʰrje s"t'D9iU$Ut$'p=BΎ<[' f+ͪ:ᶏbq0Uwψ6pQ39o6g<;C Nt26RS" ޑ&o-D>[x<@N!$s )DV0 !ԮIDŽEbH>Q>}TJ;$R"y*]i\Z 8^<2r *t6C42ё)з.T1Nվ;BuC@~jІښ?Ufm: ;]j,|?/a)ҥʶFvмqvM>뵉:8ׅ'On/`w񥠞͔2YAoGlފ3};}_v ;vz:n[}俽މ`cwlVV۷`4^P`cQXxѫkF?~S! Bo6'g'oDf'9quL{6 vg KzhܩPwvJ#u탶3qWՠXeBss69].@}n̂Oam]aFkcG&=c^=pW0 <։X7h}~ޓp1o+.Vb̿j܎N1'ltdjIt[|'f)e6KXl[M"[`dJ]1fȠtPu ڒMzxՍ+{ѳbv:zqx]7"ɂ]$+49],?ڨD69\JJ'zS zUS9iZi_;^[^{zI yꐫHP/36qP1ZԌ]4mqH`_c} 06Y ,Y !0 {޴i i i.K 2z1iIHV?{Ƒ@/uNkp6zo3^Gڙ/C!iG© 1L mۄFրmֺjr}6]ylS[ZlWi(ۮS 8oYNMJ5کeiq-()2-ةOgőқZq⚝3OȰ\hcUƪ r'\ m*Qb-$fnFШ)jcM ƃjǹ2D&ϡGS0$4^1r4,gh3 W6+v:3%]G~9HwrᬪK1: [iE{kف% 5 y4T&1Z3䜀$Bz/dbl|عlCC; g )&x$b5QSĈx 9ՐZNOlERl4DFΉoYthZn:K:DsHރg\F+hON 5笌!p/rb 7fi&2M `g:C3OM <@$GSԮ(ĄXu>~)~Yt[77:veFG+wP FK t6i1ſ;~'sG6ƄN2  )C wzP?*3_C2=B9aTI)v4w}8#ƞ)'\ 7W3\\(OKՄ>#7\@vI3?=U#l2rc -ڝ: [HDx<{ u:GK%e؅^,&ođ-QvD\g]?/q8˕ۆ's5ѧExZēZyWx}9>\x%FrSVĜ~uOf> #EЯol6?B-XffY"9Ү? G0b^tG6}gsH}N;U}UU2REC"c0 9:=Š_Q9J2U<~~o#<^}>p~c ݼ e&D+`xn@,_wR䞮J.vw/PHOOOoN^7sk\2F$p9 @lM¯[ i7\i%jۛ65ѴMz2K 4vR,?h!@#W+ KB%V &e#ѧ /F U[vFsDSƀT4W +}n6|͆)|٥ ) xT*kˤ!?-uAEw(b8ݞ4rS2F2(;d{^ض9qWفJW[ή>#?wv2Pt?`x>9?P?3e^䚯8v;j(vT27墢LV =~%g՚y~J:zx!"b?LB'TV?[xy6,.xo匪6L)~/Uoyw&w~JONnqQY/Q lx󏅟T(^ީ <:,pFp𩛏'ԫ5ƟG W"Wp8j{z\\Gjj |ZP wv̄a{xMVgd:a ":w'U uv"hВ@XC#%LR#Z۫*v/FFP",&xgs=;~s0?B~0HInc"Ô\ !yU%9'S(iFEf˹ʯ$(H9w%X#KcC*s*,T%6qF7N]N֗z~J-@ Ly>ӭXkY `f+R-,-#T't™21T[/T=R`joL" gjueU_2̘\eU&}+QpdW `to*+f1SRɉhpd \erԊ׮2ծ^"\ ^`#\ {WZw2 Zz1p% 3?) gVE_LB ;eě9x`Te9YEnDH_V\!9[bs[tYmgYIgPӯi[uSVуb E)=JKÐ]2ϒr^D+Ey &wt!w֊:J.&~ .fP##Kz=E޸:Y.vl# 8 M\חV|?(Ўβu]~g+Ywl"3uDrscJhPh.9#6T\~1H!V9kbFA|vL٪િ"SC}YnI<ŝxG~+x\շ7xcF%QzÆ"gXۊ8% m׷rm S {s5sPJ&Ţfn]~y.&-+LhTK2(5Ъ[mv6ĎfAKcLpBG(sEe$C8nP;%K풥ԸJ/O"vlr޹ϬU8 +8_^v8*"*bApdBE6% L`";Ky@UF kszlRoHhY#TcFxA.it*Ř[6ybyXY߀qZFvvooMsOr6EeXcW3Y@4'LL}qԚwF* eKtAÊ4x~G? 
5s^&3BJ$R(UPqoRz_\gCŞNQ09TZF0Qxj 8YIZhHL451$sD Dy &4m5h+$a1rwȔAH?6268W2 h-$S(Qb|2eLXnDR"iͿBS(t"8 jc<,4Q8,]oyzWJh4'иl_Bcxx]UZQ <9|k''R8)k/ؒ[۫=E(a6QH4>xFH%`aYXBs"*_%9ɒ:xrBd 6X( @ڙo#adlNWi MP5'Eoˬ-'Y?\Dpt=C_?P^op8b' YP`k%#Ƃ2`rb =je\8 س,&Kpb&&1@"MVJ7]ң)rFl7sWP1V-jeId(YEP- 2G޷Фe%(#JaBzQ3V4 &D%L{ceu$-AQQ٦]X#a<\} 1Δl?6ED0"[DG,2ШDQ PGТ50$ FQU Z/ r:UhʐED)!EK&IEyPRרDI{(ns*70"^Ej8*'µK{qɆHøhZ\lq>gl(G9%oG)pN&ÃF< oq)pqW1<|x6 [hϫ58gʍv~ u~&gm#B&IuhZ'rz;+w.Hr75Ӧ_y`}'pXflҎS !E HƩJ(ɅH a r~ jQڥ xU4îG] _>}_1h:&7ȴʜچ@*E[-d4I"z(bSImOB6ngfGK j**HhKS녲z* E lTӪYgJ qOH!g̻8(IM"*9KH004FwB|gSAO#E5g?{׺Ʊ_e?'9CB8k;8lB_-pH''H]G2H$j]UUM]e)e3}0чGpAh)@hA:X2hZ0" v3ΎNl=;I0Or? T<ͬSEqB (#IN$O Ӿw&N9'8>scmR171jf .Ɓ]R):qJ9ӯgb^tk5@hDv≁F绀e5vKWoYg'Ư/oEֻ+1 p]!$TpاDbӓm>.<[6lPSY%,ptLbl=jfgp<4むsyZ֨$daL"dgiU_qiI=¹} E/<- @ (_s} ф#n>r9%N/|WzàR-1Mj?~㏓~2&OЌ,c3y]~U O|+G92$8t>|*l~xa7O_JVIvY%*gE#3N p0A x>/FaȎ|zG)ޯͲ &f|:HK^AkxVFg K2G`~6{.|G10% K^a;NshF_|3-~䜵 8U7ϗ:( Ē{j41؟'W__C8T1Q0`tgJ.T 'lrjLO^)vtWFjx.d퇥 ~6yg0Ͷ_;s<Wy!|Z&]x|-N|jR8 ^{XOgIT?}׾ $@̛\9yO>-{XYHGw11:}x>YT/Ђm"zZj[ Tt.*'pӢtw~_xR'ƣFZ}d77kV~#yWO&5=l:CNEJj5&1)\yF ކ륛3bZP-y Q{Á;Y_)TȲxfF>Bzh|5̼b:f[GΉuO}BI?hO٠_5 ireXil|`t;MQ=wZupo~_@9]0GwLBc\ ո$ޡ٨mnP"ix$5mچjZ]/ [tm՘1 &?O&֡9`֤ ܾbMDtaYojoV7.>xRCkF(#dBA`B2ST] )k=,5D΍Ֆy/s#\g[pTFF [A(`(,qXic(G##ibYB/@ߕEݶdlo3w ^VSgQR v3>O }eh dhF drðaR&RhoJBL%`pD eѸ,Dmy0uS[Q:FTxa$J3dAA(S82Ru1nŽ3:⾋!?[~ ɛy>)c"͒jS Jϝ1 H*I Ryt3ԡ )]AԨh_72wʺwob=uٮ^tϤ"\)ޕr`B}ϲ~k=5zdQ*S@^ޝ| @ߐw]-gFo6X0nX**p)ུjoؠZc>X.tb&~Sh)}":LHѓR0 v>A;(@r0Wl>UE d?)" 꽓RL]TU8Ó^5x C]6*ՍdN3!Zrˆ.'w-w9,r>`0R} G*Fl{N,AsD,doR L!m/^ܕZWC1bjΉT?tz&0yPSӺ'ڼj,x<5c`wGR\H1ѻGfXa=b!J67 C6#aV7~H|qO:.]j9x0hr,cJLqѨ$7ijVɼZ1}O]:Ecb\v*)C[H-)}GA;֫/ہy)K@Oc(mi|*4uj$Sy$`vOv#`yo}~?>g?铧9x.3֓ӷ&?\}hɻk]ahEur츂IwQ X  ٦cycĜFґ݆пdUI;լZ$bH;H6TT9OG36ܻ.e? -RF8eȑgF0`Stpx*1zP{&+L?C8OOnE0Q ~9/ Pȥzͭey;*W YhED4JΡ,Yl=\kw…t> m0Y½u4=톳m}y,[{8[)V mżR:uhy*WnhT{eĴ騪o(|^&jm? 
Rfܳj"A^X6L5$;js?z䊸Џ|~$Er߆ށoGHt)c%C)_z5Gtv1pwTIoy%iuxOވc Hp)q4"@D ( hz1*<QmGs+:2,"1rbrؔ^&]tFj-Hަ 8d ]U#S&L݅C:Y!^>vg҈t1ۇ<TºSg?rJnӢ$JђdKk,)QV8dA5Dv& zf*#OpGAA4EqEcjŒ@9pU'&XuCOۍSpMϫ=ڛ?cwho#9{fܳA}X9P^-?LvBLij-dRbJ)hw 6 b ؽ H)Tp`quk"@q)#6rtdte6u7M-,7?%Vrp $ sԹ,Ap(ؽQRJb!8vcnAo_^3썺!jKk5Plm݌owoҘt[*'~ԁ?$QyK`2@.-7<8ACYL̈́"&G&%|'pYrd&#QHYAaj3j `Hk1hFagt A,bF%|Lp@8() "C!Cĺ[śCH!GxF;د FTr2C'9TYo^Mk7= CKSkeGtu>y75>J "O"q%<^Z:u AW籁6-;A,7x<*J_LTU @!#v6CJBr'pdf77H r(sMTd"YtL^Tt88>~pO p@԰2Cy_Ogk,|klz774[_NեXVoNݿBkZ悌_nܒ^+vҚgj uʉIj+`w1il0m+ceݺúW+չq5UtmWhylEm畖x|A6 {♗b%YG^1֋NmzXŧZ{#ѱ#;G{plC2avdi# ޹-+efzR8Se܈Ud"]\ɬ3\RÙ+Pw&;2-7 ֑[#`UpE26:K,Uku2 LLjE"wB; į5䦣!cFYwmmJ[ @Uy9U[/'OT 4lmI|Io"%ZJcEɞA_t_GBTФ&\ 9$m-Ĺ/Ol_~BXd|nmZ{:G@YYNg:L ߆_֟-U4&^&َdgq" y@,e Wr{aw&xA4=Ww]acos)-e?tX0. gÎ:V!l{p;xxR/lW-Fl&`=;?eTx =CI&Ԗ;_3  Bn,#P9Lu =ޓ:O\?gHN"pL~]܄GTץę%2Q0w\ryySpɶreut2g JHgpR|y|! [={"Z5rIMARl&.5$} 54IAZ꨷mŹڧ>I.+J_&>Ԃ*[+UVt}VCOAsHi9qG1IzM' ;AܗsA)&osk4=l֑2*a'I=E)NPw5i H5F]&-7'\f2&Q-QujיeJ<[h;\-3]Ė$xh XIbѸY|ɨe& MqruUg2D}UEr j ]r*YLzd aok FĹMoF+=l.7U_Tcl #Ws}[K54uPF-|LQZ1lcG} ?\)h00H  5[V"BpvZAa9IsteHTo5cK $?8j~7EjƜ!yd 2: j'88&ul4x&C]|0s&5 ^,Vu5RRK HL#}\uϦb|}Ϗl|R[G$ٖA_qKп%ºR[3&"#YSLc<c~S4F7A;~>=dXrA Q@l b)6IL}|ib(;K|T. 
D|wވXp:]18k[xhN܀=e/a3Cc)ܹ\丮y:?H2x5'rbY>Pa WJ&QӀ%jҬSPZ4 5yL|}KxឋBsVMX9kβZhٷ{^,|^䵟^|m~:eݭcv_ ځ] u࢝0Ek7C Ru,e tO 筥yFl}»1} l,& 9)W(4c灩*᢫QOpV'cdb*ĜUԦ[TdlEhuYY{5 L;{o~ן_#f} eEy3{")jMGΧP|1LΧKC4:1:h'O%cC^4) ,5UC2ҝڣ{u>lYV\I Jgꍄrm' 1PZoC Jq<s{ ރ #YkeM|ofUxߍIt79{d.b؁c:Y.e5l>͞Gb稒ƙ:$R7j^<;G8;L&A3;Pi ݪ+d͸XiM!!BL# ֳ IBbZ)&BtJ5 &dKU"~0xWgW FfW46 'ˣ׎R+>Zt#ߜ diJ15A筫i_zRR<9"9Sfo)x z*hɶLűܠ_CRc)ah[?;Wؐ;υs5M 6w4T.tBf.[%oB}$.Jbcr1%$jaŖsKNjnژ9 hZRۮ֊1u\ecۘ4NQ @lVT=8R*`J0Q4 66!9$"mk/x=OAvm&DxwXHۀAV'-64x_|eg%?6<[o8!^IzeV!%lH0meI9wJ6)CM*j歍6.m9L.l6j$r5p=1 #GR*Bh̸%49 J6Jm5R Ș |Z_bFadMȼ51Mc XtVI2%ͧb[]εF1ecx{ aPmfN%  YF#FBU$0ÿQ_o4%p'Wi43٠ܷg捹17sx Otu'{w6]MD3$ek;4]tu:/Վݽ>}ævgnv`k{k`7?ǿ/V'z55=~X֛μ96Y.}%vzc?~8ZOԬ}q+yK>M1X} rua>/NI gۖegFϯ  _l ׂz*\soGV6!tcN@=?ѱ\`< 2dtqlzzBGV>@Mjʭކv!^؃=Ы`1^2A  #rW]`:waq⮺n9z^<R3㮺⮺ Th^,NU8qW*0ywե]Dwc_rW]x8`PܕJKf{W]ʝuvW/] \V˓w}ݺkS`8ͫZW7ROONyɧ\~_D?7-Hyv@׬>~TN.:f˺7ZN6&ܙUTm#'|ةlUKq>Rpc!xY0PdOYUJŃ-` oy]DˆMϯz|NNNo[=pbSUؿ|[ kbm~խ/?_yyK6_Ճh<`{.^咔v}+]Aş eut2ReN`଴[i\.):wYB=1]{,ke5HMg$u掖-ck3肏%0\$T:jCC %Ԙ4Df] dTMY~4qnsŧ>NRviuM]+š MC{D!y.RjDr$b~:a}Ȱ99ԣD/>$&OUlB54uq0;_s jK[;zm6\t"~ȼTȗe5ؿjF08"ٻ6r$C|? 
,rٝvd \bŖ<~ӊZR%{:qu,>X*XE2XT9jhpפLQn =ͶF~glVpSѓbܲM=H@V"d}@ɒ:P;3*<#v:>}^ 5\ 7ޢ}o9#5Xf7R;Vn } O/~- ZIr̸JaU"T3a7:u ) HR/ ԋ-xfhwNiiD7+h腈!K c2p@93ܪ, ,fx{>Y+$0:Y6r09(NeԚ8A|zKU8vĘ,穥o_~i 3mh ep,L;`k2#arУG7[)%w*(ƱH:mGyg-Ng}BV_q-/e<@iv́ən߾e7UnҠ jOT&P$u hՕus\~HwvcHY7&4fYt J($p5Tb`LB1j"DpEɼLQq=I\RKf.s!egakGETp%r<LuA82"d4H^Ć0'gMWm+0vX# ?(b5ɤ-=o锃q}Z"#oP$utWͩkNS($d=+TP*$/g1f2MF[;X S1Tz{kwN@ s'\`V@bVv|BJ)JUbRB׬; !:gk 'Ο!y}_FPv%.6-5qM]FoLF7~QCVB~p!`=xGyp!~<벮Fl^.Y{]:rWȭufuYRgG6OV=P+znyxxGwv_ڛusBlݿmM˗HUa4n)ot{yZ ^.ӯ(xuww.r1q TCo2B<0A9T0^ ɬ˭5qvc:}տgM`v8&>1}IG_ᦃmT>0sUU#u2</F`kT"r%cNJ)*(a8TT>lԷ[x ܺ*y܍n5vYfU%8o ?{& qxS<> $x n7 {z0Q4ɳХZ<]ߥnp8yT&p1`!Qhy!Zd(aZI0)z.17$|: z~iE]mgȄq3hVЬX5Ջ;Jv4+ -u|լӆ[JTFfĊs*EHʕtFdJTf`٧8f-K>$G9$rBtd)Er:`ZO䓾m,v p+zô\T!*#YhIGq,SBsrq ]_Aj TFMXbpE)M [ecYRm3%Ξo!H)ǣ)\BOdB46+}^8Vӟ\W&ps:E 5ѻ2ؤWf(}P͗&:KXdA[LpQD'lc2}"&愳9u6Dw*qIh'F5qm4dm"Bbk=b5X[ic2,@Ad$"Eْ @UQXe:֑J3;˘mUAtt?C(!01d^:ƌ.s.% 6&iMU5qʂ^pskAO"i7 <#{ZӴpRĐHxKZ\7cxJBe-A' ՜iFZ&y#XDdhlJ3kފL&&ǒ² rlٕ߈ni36L;cαg b@]܇5ɚrUJ&n=VYdV A*Srs f?\{E.Аpy Ag`YFJ|dK2,״{ Մ*Ji_MG)7,W^#JewwCw'I:G]ߛ~:6|  Ⱦ`sm}߽/}Ej-& (Ԏ+ Z[RR VR hphU "Q J"}6-`*Q9x\JU"Ѣ0TVpV@AjHA[c3 >9sLNyKLwV'<3wz##P]_?k1cAzn[Ey{;S)lPo|A]TPrIn3iFUUVZ)tх* \tK{ؔxPL!Y(9iƈ@H-7Dk ݀gfXYFz]y΁fŪY 49]m/M5Jw큩(i1Vacl T_mK@JJ+2 aDS lޔDZ|{ZlWwعu5Lq FWWWvvH@Hh G.IKfgw6e01Bp-n$fltEνKeF?>#HQi Yx#*u94t9(=W>*%=v<wi`.'\f2f׍6C*N.U}Iܩ .֛r.Rh*$-u4/MSOB+nYLET;ioƐ΅9(Up5<-1VO[]n>K@XZ-3(넭C3*;4,9R4R ʈhV_@);/pl-feg   .BfƒT2%^;SY+-:ۨzfPJ?ȱ798B 'YNeRR1)蜤>$pSvQt8B )q`Wą呀3L!mZIϸhQ ~ NzYUJUqOO2[Fx`^Y/MڛM??+0◒2mǔ NbZM^_C2u[S_|>R: ␡Lܰ kNxd<|5{6AѬ#fa ȼ7gr!nçY2p0)snaHz7;u{7^D7zeD8j@Ar5L/QmN`ykM+D\M᯷O\OKU&==hZ.O~\>x ̾HB?WiJHo0t;Î_spG&7tThjV'℃a>Q^,ppu?uq[QTkS[/j>uTiy8R4- aꔟpXYS Xb.cA5h:U;l\T;d?zU9@{g(7&#(ј6:MNi^`q]-V xRI%鯑%[-Υq,gȡ\#sLZuG Ub\8 IcLh" )("b 1njy:QmGsȰ, Ƹ#aREb:D#iYm v:?]na7L4WĐ*޷˗d{'{pra]EJ(|hI DbTWR ;Z}K-jBc(,F[,A$Xt R: ԉ&xj,Þm<*N9#cE =)bP@*uLG <;cƌL"^ˈiDk45[!^RgkpyHx%ewH 'vS??mN=V[S8mL).K ,RA L%CMa$O,`D|0aI" T),VJ\aۂ}2, ?ؔy]9 mm$Tr[DfgSsݷYMk)3WYRS'E3/չc #*9FHyQtRE#fHʞQk 
[binary data: gzip-compressed log file (var/home/core/zuul-output/logs/kubelet.log.gz) extracted from a tar archive — contents are not human-readable and have been omitted]
QҫM+gɵw"y} O^ HVꀕsISOr<όgjةp^"'%VyU"yi}i4x] ^H&y3ofYof^"'wŢ ߣ?hy;86@^]]z&y oys (yt+k5zv=z3?_Qhz%} u̗蒎tk~n$rWuv-wC}\K!W]^aRY%gZmPJ˵a:OpHTnwD#mJuCܮn}[pO>J EZv7pvJ˵Na:Oq wS!bg[Z<ѲBmhYǾi"%Wg*++:ef RWVie^;Ce~)T=<^ki0y{ιj#^M({wa}]vOa'^hg˯Їd&Md䲼ttw;Tfw.cKt!iK4)~9z,i5RJ"ȱ*9K'Bc*.e ؇I'd<0«OE4\ s"OAi=`o,T~?q32@!p6)!Jщl#BI`4hѦC6:Y({ҐH؍ L5ؕ Ժ:`\ P1$u&!qh}~M-Ή`t`|68[01 ׌o8:z#(,'m#fi=CPs -qg1n44Ɯu/(rp܈~N>4ອmf18!dmcȡ LX=pCE3vV;=ep(C_:[}<'~p@F?7/hh`(IzIۃs 5A3p_> MC277yK(f)C T!$jzʐ>t1~=fSs>ZE$DQlYr='0ZQ4ُ/X k(OZ(r%*{od)OhgnA@}7 tzt}z5^}\$wXX2D Ҋ ͨݐDq1Zxrr,&fC e0F~:g0|~i]=U}&:IXJ҄ R# 8GIb4)8bcFR3 ~_"/ӡak>q)o?`gǹz,+W?^}W?}^X(I6ZG7xW8_ ~o'(EY2DF'R"l #nDLY$)"k{AJ}D堖{4;oѸڷݳ TH҈`uK'X41ɿ'F@H*+u<*ri3D-I~ Rƒ 9Cr()@7HHy/2}xed(x7`ʰZ̮H^%d Q'4EڢOQq#wAi\ JI]D)0wFؙdBj$l1. "(Cp) uV'UѢT!HVOHPKlt{Xh^[ӿ[Ϩ=_kelE?ӯ&_i?GˀI d .bhD.xKzsK[Pt%Oާp2/佚$r/:}I,X)3 dv\q]u+֜O4x}\})ch ukC6M] Jڨ>nu{qNF4Gr]Ih X:ﻅì) )en-0`Lr9:.'2!(nY6ʡu#D-|\)^;:<$""5ɻL{! (i)zיTRS#Ymd^Wx <'#%L'|0pd z(wx*qZt.M7J5'\"i4vrN9琱͎L_s7&l"! ܓhHRK/n,m*:D Mp-&FR9 wsEzo~5|4`4-gFٍJGP^L%Ȏi|d]TF1&Wgqt1;BWiߎtJkk,!/}]'iyoJ⩨l= dcϖ%Ed|+uίhKF~}t\M_RzT7i^cl)~Hȁh+M쫗ߺ{֖MDXEnlͺJ6n]Hȁhҵe_e[[4kbv)<+޴uk_3hhݺV2UkuZp֭- N5Vn5־кu!!.djݼԽnmiDtڮu]etikDZ.$EL)j}ukK&vU*M[ֈ6n]Hȁh+B}fu/ miHtZuISȴ5[r"LIƘ{P%кAi*m''ܚukhSօ)4?wߺI`Ҡ]cÓ)=߭5M[r"JuԚps8٩&")`PmcOzBڍZÉDYݻ>TZr1>Nu R}%pRSMS éN5Nɬ(Q*CKM.o$ ҡZڭ&?(mPYTae {;AT(*ơ6T:r`*kVs9TֆZ`{WYS֡&L)PYTػ5r>TֆZ]eMi2Cemu Z1w5Mv6TֺYV֔-h*k{[Y eJ3uKbD $0YЖL"#ɍ Q .'i$޻HlrI"y($Xz'B;,dU"*"|ӎvl*f<0')axi]$M 8t SR>% Z$mtY EI+wy s$N{Fi!̦hGB֍$a`%mBF1 m m% %VY3Pj'g@ hG.YP A此!vE%HP)ScFH3-HH>(QwI<^*J~{\.GrGq<]eKyk]}]w6D3m8wsO:p+a@h% Yxg)$I; Oތi#5>/qVNmݼQmbw&\ r޻Tݦj:F-"4M, ѷ.%U\^9nlq2'}z/a']ǻ6H(A#.q9Upa pDm7 L'yksUt?MS柟O. 
ir)\~OqX trF_f>/y.4 I+Bͪˋ?ҏ<}}<ONH 5R<1A,C%̕5@jW}aꨩ.Eɲej>*ek%MadzC!ɻJ>+Ze ΍ reEX}x >jjQo~z}*2Zt57ٿGKsQ^FXclAc NW*+QIMV xuяm^)@}3kԐx>[ndmSVʵq$w+(@.X,B%-1O!$`DždJD?[PlO%C5_\]<~5.zybS3mO aL4XH(}gfJIv5{5c` UDCNad>eďM52˜J pף/M0yj28_N=^@pGXdoDP_>?+}j7FF/t6v":;˫c_/ֿsfߙ\엫 |"LoΟ %yxSgސ3qCoeGNXǭzb(Q1)3GտDLk4r$c~2΄<C`%1[2vhRE_NE_՝jA.v(Zmn.5asgU CyMక @#7#hL'cep1tT&YR2 $(c cD?(Յb"p Ƚ</(o-#쩷xܑNׇCda۲ a& |{$>&7_ZIA_!bYϗ? Yhmky]tɬ̀ ЫϙBX m Z ]GA=B&T^l\hF' f*O`@d>ΚCd~`J_MwVځÝЎVGEe,ʬmL<%̈́(I~3X|gAOh9(XƆ '; ÄPNh S3$mjل :JY$˗;:h9mKIet0Q;b&  z 2ĝd-x2 Va`9\ݥ O9z 34G{<;o^Aٴz1JJŽ gR1$Q4ITV[V^ﭓ|!{{ϭD)dAY0=$;)Ɔ6 þ_=}XIJ!); UIixI=Ic5vw]U]U-*0C"^6[uTmF3&q^SNh3ڌȪ"R*/osR!( K2DZ<95FZtNXjB͍FaVl?)5۷zTv khy}[Zx7`[Umߖom'4D v`jB>cpޮC)c⒯qDZMFcjPyFkW#Wz!9]u+Չ֟돠ҥne(m '62z `<^kޚ4 (`̮_w>;?e^?fc^?fzdpYbD c|n6jFQ.rl}l`6Q H3"~м+ VYaaEz!tNJ%t&~v!6+K՞|oyhEc{n ~7AnU|˦~g%#~-v1e^o죉ٌH=:Jb D,}W lDo8s$ǀeJN}-+ү+ү_+o_69x@t3r6Q r D ow!y;Qkܑdp9x(tE$U&P'.6.vXϲ&8 M/1;ozĠ G!r! b,f $^-}鿬/]M\Z$cJr?$cJr\6X+o%')q%AyA25 $0J6NT\;wD= ~Rњ Aĝo]#7a>}"b&$dPeBPjHP1%=(F `bXIj#Up0LEj,!%+s0[xH(8#@ae}cBX)1y8v 9n8y X`_UYjsd#a-;!jXC|VAѤ2jaT_`T9W~cQBO-%*\ *˃T`;-19F"jTs/ B4G0l`/"`RO T1DbHD .5p,x6̏lbDžEBcT;n T׳irD/%o,_Rcs5-)RO{t ZAO?~|~5ԓaa0't;C3--v=[A$rc4S{ A!s7HmE* PLfWeߧ@1%"HĻel"Ζ/72rGIv⅃r%|oWwIf0@ F2Vn2 8 A Mkղ7t`hSVHرw3iz.&wD7T u̹DTs" EG)~\*L9c#M`Y .Pe>\vsLt]O(wDQ : f htsQJ0)xISC)Sh.AJRK㨥mۛLs%v#R^ͪ"Tc,PG'y VBJVGVQ#) 6'HN-L^t?)ъjP4/1 T7؃s  'g򠚴CJ[VV2vtĢ"L5mKMق @C$lӍ~K00TU0ʒU.PºB`^Q "*PFpuQ0x+p9&l%W(ōEk %oe$Vu[JNX`-Z* K%Cb(nDgy|"RZѠDWYf?vw\Z}{:Ra٣QP:o_c }24႔z@։N^Ŭ7D(No 5*{o!G@VôX>\wK:ofs4^X_!1]xԆpmi4oƴw]Q fun8u%Uȏeo1M$7?Mh{ <9LpV$M BOE)_w .@LO9ۂYT3iR{A 8`;ܫ]uUwfYӳZhH,>u#1vjwFLr /kY[0v;?SVf1XL3(ƃv̱t`iCl Ensتlv58[GcvQ )%ij@VaJ&|5A2yCƏ u0c_;`s܄ V4df0yɞrU ZJ!%EC1e470ㅐ6[wݱޅ_2K_݀~LF30="Ds;$͆R )[)1s&="\~'zϙTa7FyL už G#Ǡ8:c$ΰN[&r(1:vp哑钨I<!42 7eHѝq+U }+.;޹pd_)pj@M%Y9SJ\fk7?ZZODPGPgZܜޠHJQ>bm6UbJ?E[^aAB.4E!c,7Ҁ3ʜ̹ v.'e$4 9 q$mOE:ppSyrUaQJCv=::KZNePD@vϓ6ҲO 0hSɰdm,qkJjuINqcsi,Q#&uï #Hk94㸩; P5uE$ubi.H=Wt mDp?aݢr p/T oz!*0p^Vb&8s'}J i|+顸M.6 
cM;Rsʫj%X*B:BfANj4,wmK_!!C>,l'6ocěK%j)*%pR6#tw*vσ?~TAחۤ4gR3-1Pa2s#\a >L]r6ҐFC~ a4eClf u(4#ڣSHsnD&1A`5d@lJpԙW XGIh6t=3~iʕL(lj ~oE %$18JhN`n8JRaZ|R#`1$QUJ2%*u IZהG5(8r;2$&l #/% 0ѯFjaٯ*  Dt>B(EJ4{E#'aV@-aRw;ʃ95rM>Yu;c8zt?zMIRLh]uIJ"Xn5|ߩ{c|L#X#A <l*6|K&TBؐ0`!snm*$e1QGo`a=ُ ~c*M^w$JLμwINVrDU:33N)-1p-ժ+_,XPa%ل?$XJDH%8uT12{dn>;J"QTu: F Nů%hcrPhX ӂ\`Kĕ"{3|KAҁBwux$o ZŊ^0#5}d AS+x `ޒ [f7:SAl h F4H%f7cOKU$3v. ;b"*3ܒ <)xw?ջ7e+7xr(ku7#"Yu?MCo4{x / I?|VW=KGfܛqV_Ӑ?=Fn/>䙝O7upPgE^"3/7Ch =${ ?~OFeLWBf>Ska+$cM!<>2.Yǂ% EFJ?:բ}hr= "x]pKR(f206/L#X!%nʜ?I2eyɌsG -~Z*7e GP@xhbf{<#8Đg+hBǘ~>O>94ZAfN#yCbofz?j=s#l_$E.LmkP LVȏwmI@ߔW&=U FQNujP,I}h RU$ƻJ%q%iꔤ:%HlJ[$Д${8kKԺV\J| ʐ!:)RίFPU[#~ViPYܢe_iԢW_ z߂˼1]SP "P|6YWzsJc0*mnǣc'RCu<^(%EE/oC~KM0U(9SAd8"YdߺyWm<ֹN !Q5wZ͵s/a9G@J^0w?9+4B,n͵O5f֔Vn8U5!`tt}&]MF|[%NHDZ**n9s,ֺwf0{#- AI$c]neTڐ;swx_wE^^һ%R34,%a8!M~ ff7Ȧ'k e_ԁny.ûp6b¬gG .KgQ0SA({U`IiՅ3ǯT֑.T_{o|?GW1e0mGJ۸L0eCXFeLX)\F1*UoS[ULZ[g&O{P%[# DW 1=`E&`qGЋl BBB\hlD=/(@I*z΋ ;RZҖtBr)N%,5K]Ji kU`Ȼc1E⭡k+vJQa.l86OSώ2vƀ01`?,/p$# R5!цr!Er9<{?|(`F_#EgL^݁{.= .f2 HF!9\0bsDijEl'z0aò.E# !ZZł=:z<"%s!0A*hE]wZ]wrN W5S4zJaDݼY6YS:iSSH[Wo$$x@W}5"2nsASweq$|b=GDfЃw<0 z ?D٘nX}IU&j >2#3#"|šQcRpl(*٠-YmD֠]C$C9v{-ȓ7S&K`6ٻu 2gf2~)M <&KI 'be_X>;7EaU^-5,Myad] A4^WS]^ܲD{}믗5ןޞk4=k!#sqU$$y`?Wiwj$$j>~sF/j]5wȉc9Y+[p~|v ^,p<=/=_{vטQHcp:l'9iDVqY߃ (Ȅ0mFtZʈ V["D>hEH6(PT4͈+lw)gͻI^< ^A|b7{?$(OUy Uq*lg豶ѢGcJgYX {' G=Y4o85H _v%pŃ?Ye^ bK?_e+.ԨY̬ et3'.w9fFʦE.OOIn0O˟_,vk]Z̾u0ٷcV,0ܥ=D=x -#0NS"g-8ړ2c:9 ` E g='g{ 00vp^";Ӕ{>mn\vJV?N{v[;=]RAnI|v"_ҙrBa**ҷ{1p 唫G,$i&-k_"6UR.?wΣf]mn>/j WM퀑>Eˉ snae ^Z-9ecxl|OVT?>֎NH<]+]mU{Y>qn=΅ɋ*#,!&oζs [t&ݚL"<պN0:m+MF;;6?=dEt<481#Gv:懊j&rܛx~ :NOLSAiaoPc ˯ր;pu/ʾ*싶n7\*`D. 
P,`9."CdC6sաpußyOŧ$.bY}y\u̺P Z*|ch.6 QNљZa xUT}=23guEUT=wouQ`]T`Um a"_ɖ|ĤL VlI&Ȅ *_J>h*'3Go==5e c:z o7_)VXTv:P쨃X(ߑouwKH1sb*y1 ƻta@FO m5SM,L:/iD(4ʀ`d ;W@?bONGǼYsUTݢ {C +;C>н3t{-~,~;Ós+6:4VW/aSGQOgTɁ@ :}k`H>8a!l75K`KQQz*꘱$m 99`Icf幔T4- 3|_bSq+؈UP[ZIw*AkNA|쁘!fzqExg?Ӫ56Wv'&Vܟe%A .>F̬ʑHc90La`a!Q `(Q@ƄŅfqf E13O9/D(8$4⯹Ȓ !V$H03LjfY3A2ɤ(,JX";5<9T6l22ՂnAbJH.:A5 ]Rf#f~n GR(xcbPWV)6^|I0DZ!QL پolEݩz%DV[o_-f茞<ܶlEQAvN@ (nۢbrs-T\OֲRp?bcLD+ĸgJw3+ )`}n/=&,ė92 OnG˦8d3F/:OUO A~!w /;MWZXۼ'yr?nn?OWM?&E`5|mZG%6<1RRJj\O)}߮>W+B/ npRUQ$kBjfYbgg*QՠZl֑&iB\:|zTQB=8~eګz9䉰E#TNT<=jl)@O9L8vk= hL'={H :ϓכ;磯:}_ WǑH]͡V3/,J.0\Tap4TP^&h JcR-i- 4Ȫ.Qf؃96 wK|TRZPrj^"&iɬʰJ# kR '_XFP3:D'cR7} `I#.(Űm098 D5Q uP5)!Gb6 j*fc:ypP!5..u9PoaZvs62O\J?\Qe&Y6 PE#Gy{Hr0&'la/F|woK`1kNETa.*H0^En=%bbC7cļ:^~VV= / @pn8G! e~t[QwmshӤ//]=ko9+"YL0`%l5p-n%-v䌂 ݬYw c^H^ =3`=\C$9j9GIP?@*_԰.5N'o^ovA5"?a\Pf^MJSWC֝tqp =Y^wdI}<, s}J>cܭ _8EKxՍ2h+㟳V>|(b>~sܮl FԊǹLP[0248w\xt.0bEx?dnָs#oY#]O.eA 7L.fhI 00RN!+a/%8<ɨΊrYZb(x,jP_g,1'7Yc}:M"ZN,Wۣ&wȌh?9:2hok׌N[8H q*00֙>Q;Kר&Hvsȼ{G;od !|~ W7_ǂYd-=FǨw= GQb 4fbίRƼ#H~O.)Mk_,%yGw{_;Blz}L}$h_@Gט_X!KGRvJVLuoz׍8i\Qͺ(ä,l\k6cvsd MF-x5fUv^>&*U}kB 7Q&hK;)eB#/]5ٲ}pg27B)p4FFR؎wquNbn#+aje$1R[/ogrHw_ 9pQvS+@X"TH.#sy*yRU+s'=[K,|/f1.oO#w3n_ skqqVXGtNVV0>μ ٖyRd1&F|43)A.SQLG1uySL5}5 LIV*yƉG$E(+ȤU.N B&.U웻xkw$H3eڹwz F͖4~5, ȬؔI 9ld0PkI `2*2k)#{ˏR(.ޖ;c`Jϝ MsD}kĕvzPSz4¤&OH=b 3*HM|&O3S%X)ybwjnx5jUb>Pzt흚~i';Y2VG"1鱣n ÇQ7HeCڑ} 90ɼ,n~ SԲ&(J6/ !VgAKR/TvŶ8?1aשS]דضeo{ZѧE=cxGi9B~fQj֗s" 9ᵚ_F;>2aɃ݅I9141a-dඨ[rq℗%DfHm'cCOOY.IOE?brə3붬r2í<0_1H;x%|`ґMlRkw)yL5cxP]% X[MgW.ce{(j'@dfDرAVg7oe-|_#VLrD`Oԟ)uV[S:fF EB佻\ٔM+ˏ?Os=5bڄ~psWߒJYc"99@Ye: /T\Ƀu)%:W6Um޻wϽd}b3SezVoint4 ou~)LthG6|"TĽobɽv󨿝G5H N2m Ӆ,+hOk`BuaT`L."]($w7Fq1{64iA`Ӕ#3FE߻g;/XOӧWm~ rzw/{N|zt.n 8 'P{>ܭ{74Ssz~GڪЭ nio?79o06F?+,64$r!Ln& 5/ J0%蘔 C` Bj's%0S9_VFھnD~I}ľw#0e^Kyhc~9P7ʉ ,pr.NH2L)ڤ$|/IY{P&ڲX U$S h̢5~#aH =;lmz tXD![&*W:){ ϴH=*IȦvبYi6~ZF)N6Vۣw""Bخd){P[#=2 MF0Fgn>4:⡡o}Y3O ?I(0i+a wXo$lѡ V(H׭DAUN#]3Yȁ3ש 
QؖVR㻿CSΑ.'YBK~(ze(bDs(WVDD[ߞR\^7D|oVkUb)Ybv/FML^3r>PI~҆٫';ͤUVp0f$PF'$upё~l[;"SvG)K8sR=Y,lqψU!w ǴD?z 9 c|{*BU*۫߾G lS4r*:ST&[X&e#%ezqKWТ0MI;?t0#%NWTj]v(<@pzձ!#\@荒p*uVĔnK#x0QOU5x ҥ$xt%NgT_M,sғ-krFV嘒I Wpn<,9f'ސv% E+ x/9@n缼7jliϜ-%E98G_Ie%h! A"%nDkJɀ Nt.HJp U@(T2Hʪ s,JRoBMV#PiA(0RZtkXߛ'i΀Sˀ +Ruj*4*u{,$}NvIW$ !kW]h9e\X#lQv;Zі:q驌Hp* ^!{x3fs)Ϲ|[wVeUedWBz3aƸ89PV-k״l)g42HGHiha2yyN! p`͟kƫc6НolB I%+?zX6(rGVWlr/Σ; *Wb+03b@y.Cό鑵M֚8ZBC}dL|c4k{~G;"5c}GLgeXl:倣޷ hn?opjfU;7m͗S% essI?6ovmf9e~PĐ{6=+^,,vyG^ZE'svp⯌,^*?{лuuBd{`8"'5̵Ѕ_ [Rjm%!4KHiZ5԰TW |{3ۧaVdvYTf#3U V\[e"Qڲ.)PQ}RV+e@ 8;xN Piң9wCOB;,w3vwc^ݡ\< h~/9Tl$6siRi?C\-PZ(Ss.rʚstDŤl|yնj%"ABwU;A+m͇Mvުkn͝xBO̼-nO>HǓL\uVhmޝ\[|[??Η˛ Pc[hhܜ, ۇrĽJ]yҖVh$9⢱cE/⳯~y߻ŷ}Mϙ5X3 9Xs2cnٍ_nu!qL^1n vUH…pm={!I$|Ly@ϥ94% _8^>/9 (*{ڕF  (Gΐw tm6 \gq /pQ\`FmL!`"~m̦Eb-Eߝ6z-:ז=U ݛ-PLvumi;!# xQ)}m`/Ve oDJ#eإ}/YqrV eXA@*KѬCpi&ȶNւvFjXݒVM"_ QKL!lydI050-ƨ@!JP 2 l8ƌGLK4tCBN#tkՠZ.(tk 5FrDAЃ KpbPxH2`(ML"QG)ija!jJ` SdB1 HƥV%8#c‰TZK̊}?0 (28k2x+֋01ؽ ,I=Csrӕ-I8ڽ ?۽tu.ڽ%pTQӿk+/7:77P_#z\U*+X"qzYo9 ъAEE6,2H瓎FCk0}Ŕx󷧦C.v oN@BA $g!5#  W2>K_αaR]*ȬPLİI7b]vmvsr=9_:(OioN˩>]/}uo+-e{؏(,SBeD{Io|[8ŏbE!Fh7dѝ ?Qf^aI,;K6ߤrXzY2K0ތ<²!dSZO(fp /R0Ƹ6K=1{j36whiTg?+@.d4ht7_8;\%QڲO.nŢZ^cL*:L*op"!țk~i[iSkkSKI!˔. =I)dHFʆR_d㞶|3)e_H[u.4;NmHd/p==Sl%bxKoyNP?&XI*\OM;{5mr9ֽSPZA2q.q~Q:Z^E"Bg3뷹 9\]\dɜ%gn86*,rW9x?Kt}I)ARWy2Nۅ@/q02 f. v ėOʞqc GPD,Arm@]3[P|v!WPZw~'_n300*;e3WA/wB2˅7Y]TLF6X^4O s~f9(LMY.viNҍ0 Xg' #iEjh2*'DlbO궞$lFx6#NMm@{J b$#tٲ:=`Z#8\&0YǜBN4`ntKXDDI-k<0xwppn0X7W_.:Ѳ19j& MbYH_:)*xOfZGqys$QH* t$Eҏ]837b=eق-2Z[(k" w<^U. 
r\G窰s{x.F0e޻>+T/[8}> VB kۗ_OǰpѰ8[;t8#`yDŽvJ6[x`m?Dla+{Lz@Y.ƳNXf?GC^;jYc65<{)IVK`9cHBmmRlZgȷH̖Xto9Dla/kfyQ.cFQrVl1ezʚ'G^׏i7Upd}SWRcxg6(hI^WexϹ,OκN"$re&)u `21 >Z 1]^c.L?v>&M=ic4M&"t2_]ܦN)[/iFS19,z3cUI`UF3)gQ<f cN"t"L0Y{jO9K;Cx'eשQ1 Vou0:j0]gD.޽yOꀤh4nk4-`7Sid׺d;,BTl~oC1!̩٩C ۤ^'}jA#K܃~}(z(O RlvU=@MV9MW 'GjZ76(Zocce hS[n:%@6m>MmȺkh/QQUL/]g<ʨ#b`} ).l1kK866qA , IojqHBhy\~ 5QT[XwMtUPuܚ[ 43-ZP`y^̫ P5RT n r?p-N \S.焋nO`]p-> _cs/+]E*ץf_ Xqiy:G ?s=d;;cQ/2FɌ&(cVݝWNisOfLLzND(*7Z F.2,TK$e(Di4Jfv>@/^42擳띃Y/aS:-Y(J 84[zs7F:d i)ك~@U#6>>~ ŝc\ ԧuvLcNMK݁>֨t֎8Fx`XƜiVS㉋c i]uF9¥<7` `&o.j,0|rR?|SuR[U]F:GCr0+Ld|%Q i8eFc-6jI)9iBNz fO5Č0\*V `O =ZfN {guagRKK^sԏW(ԛS Jv-nlK zM۶JbD}9U˽9nb @$ "5Ț$k["_Km(֪..»6-W;|]h887 \ LWd۴$`F ƓA <ްgkddԥdQn}ZKEkffWvVed뀡lN[F BhA/ޟM@(*p90EX?[?M5dvLVQ<8+?Z2zM0.\f|^ HeJw3k! n7r˙1+O4B{5H>/Qbi# 6m=Uŭ&Eb[39bɲPL2i4}ON[=x 6WX6i>swa9wNLkHf{F}$>{wSn}Meq!$FBJJt!1UaDp!ɉhc U׺׺ JאWp3݈<5мWXs(DRJULb%>8'+i2 Zxj3 EosU*5" 6yqr(rA;Γa$v%~uC 'ucy#njDMhm܈lhe(' 1C#4pa0p ;jheC[wZ5γ&oyž(zD$e9Yy#NF W=(-W<HTVP0b%pe|! ˈ- x])^į )O@PmsgJI:&"#!*PqAW* LE08e) 8b'/(F,jYE3 `BfDT^poF `3Y̨QyUIBc\0.1 EA{pFB B- -6^\y-0{FpDu5P &(SPC#QHLoGHp0­.oIFܮ/). 
bPE@ xCy)Yc.lrr\r0M)`:4r{Hq=n*n*햋C :pYD \QB V\r ƀcgKl\ zL0|u#&on c5Q/ySGՋrMr!rc1(3L-8Gh4 c9f!HAy//A2 .mm0`G(_u0h0$T'J)Qk@@!A1jUD" X!ZJ@aHĜBI`^.xRG&v ZTq4ecH$L?5} ba'vs񲰆so͇4cM7j\o|jj+M4)7qxC# Y9wc N;l & N >^ojؙOExڿFaq9`*^CX0x7.gC.3 {#`OL>/n.^g 仐BO3st%88욭Ǘip02'lKe6ҝ @ffqy_[~G,+lGǭ~Ds c>v^E z^ߊM=ęf1I, Hide œ@ uUbq621Fb kJ *Q À;M(<>Ri>` Nf5~p4U䏝+`)p鏝[zz:CvV }81D2jE ҀyFƤ% Rjl@3NLR`vU#f1KՈYFՈU[~~S]:l8{&l+[sɴNF J|)k~&E1Ǔ5%ɵΉ@hfl$M2\J2,۹~?pGf% 56ʳAG5K'> *!s<m4PM%ڙzT`IWXq.RLi4AŴ0P8v"odyK-ɳ, )h08n~9bO)fy` $IJE@W' +)X̧E;*tJQ9N)ٖ oE.R6Z2]|f];@ ifPc Am`(ty-ЇA 8cWnz]@Gb;#l[, xk-|Ֆ/*D^x"FصK>Zkz>['PzІٺg\1ڋїͬNMvmc~ `r(@k`{[ Vm{3[4mz3գƯI0gzo;ۣPÛڡOi^ jl9K9/`Ճ={i=x'G-{#pmưfxd* 5p_""hn|1ڹqHH).0>eB21BycQץ~Aou_u_GKk8Cj<#gx`d-RV+뼧: ?.Z/eV(c]95ٙhazM͙gWلSn1s`Xj&uXلd!NAUbOD8R>ķ&/Q (G:n(FJBs"-x@t4p !!JZLHd0,$4=!TR-!~ Ppp2 AiC 6І&6'>P4uPe 0H X1cK"BVs.2h12SGю QGγSG}:@ :ƈ+X+k6zz^| }MCX`D\:HD} 'UKSs25J45\7gj^g]MGv/8lCPCKW6|hI *c:3%xovk.Z#K[ok*2Poy&cEqqLˊQmce+d0Dq``%}}ЕcTaF# 1,v&ޜK˕|6oZ$5HBBUyg:Sc1lL$6"~BN7P($HsmpAELG9U=J!/ kH[좑&t+Es] x`eV0\hBM3ۏxC[`uM0JShB Oq^Ghy i:͡~\+,EqzW/r>ۨ2oY4lg#|=#|3I1vu3z{ Ԉ@G `Ad , Ɣ5EiGFĒ F)rTzQ@Q#La$&7Sk_2*Xa-@I@giQcD04eG-9V3rmoUFaQ\?na lMu ~ցJd!\։2q(aSއ0pT9>;Mv)t71uٗGwf?-ͦYYg:zj9=;Xع ӆ–yzNQOo3IM u+I ݊ #t!N(Qa,v& O4t r͙K9-.E(Hb]s,g]eK 2KA`}N\#)Wg@R9pVX% $wAJ?kRt.,-Bt)gI"2"!!EfONOxLtZfw "#/ٱR*kؽOרaM% I(J"l>d)0FHǭ,~s傐Dz|ZczկIԓA+OĻ ) W2"Z&2Rx888ފ>6UPLY t3o%mhʷeZ#`&-IBvʼLY"H,إ<+-t'k`Coq\Ja m[W, qK>X <7@^c7 r.[ne)c!e<~QdFCģ䙗FţX.8:JT нN)yQe)>Q_Ve(_6Ϝ9b& Kd\J*΅s._)1n @ ػW_sl` χPv8ZUAtycپs\#uD}lF3K@a1K3&{Cc36rƋJm]+v`ry,cq8h@ "> @(MFa!}T~,ɽvovdROv9MSMDpюPuX4נ{¨k c_ZƏyo nn/h#Ϗ_RZxG1ضB;VNZ,W0G wp@v^?7N:9xWu&k%_~:{䙫i]} y왛6.Wg'ߝr^o3::{}x]y+xkoWfϛ 7aؽUC٣?*w \UNM\ٺ>QM;]K*D 6lol?xU7D(F9be7[~ƸwchJ dadYMu>0?h{SumΚKo >Z?/W'U}*06 ~^Lz#nпlU{>['Lzݳoon.ߘAIh{&HXUux SϹuln!9rpy}y Nlo._F}n~ԴovkWo@,v;Kkv5-HN+6Js3[zcՎ䛞$Cplnn8N(&vތM-lwzwxM/<"V3V;uMXF8CXk/n _]?^'==?O?0/{Ϡ|2QIfE)}c߰Y%wuWN1@?OI2V»Nį{כ^1{X:_]|OݓHHƻ#on;kn߮[%v#٘ zh;{n4 7[z쓣vn"l;yVIY2K͹ 8/j,3v3m!1YؿjV͚岤T u䫩*hrGJ>*͂t42$ s`l1Id饱/hO&khy 
ܼ4vg=gb ., aQil 5*u uv糿`eAL ,82Ɨ"!HPLX̱bQ)JOV SڃJ{s_―Tڃo.NZ&W%hdi^(%/A/7eDi0 S!#)c? CPr}jʂydL3c^%(?9~4S |KR,b{ïghŘ)p '@ONs[̞y$ 9^JD[{aVT>gBn# C 86QW̘E@zk)ctL}w=(c<.7wL,UoC"!TĽw_U X)1@XyOE{<Σ? JUW|o\`,[ᾟ&BW) !? 2v6īd3U"6jR)Qb+ 1k-/J?֤:uՌ w>G3}4Yn9M?/o@J&t1PJns0,X- Վx(| O(f0|X&G8D]'Fs 5VxHjİV\yI]D~d)xgv:7GOG"% O`d1y vm{|tz>):SUlŪC~:z:^s@(-BKh:yh,]w aq_Ic̿oz`MO dlY\}1kQG'`n.!o}"֣}oN7Q$e /W:ck=i J۷/ ٕ!NfSPqzr0/u.h}~CP@tk}N IN RěC]籤!򡅁?ݮx%`.Ж!"`slETiNș]hbkń5f)X ;W !WT_EpɊC{xg (9ToՌ(x%АEgQf茧DCPk4Rڐ, !]ր͕P SxhΖTojc`5Mže/-Dw]^-@Y>ߖ5]k_4;^4Us3ۥEE- e_%Z*l!DrVe&l{ʪs.;j b ̿REB4Ϯ;(KKmIзK4 AHOM9 gMRcċxR@F$a0`lۉ1'9oI^wrշSSRRO.߅u|䡟2\;Hm4wlC̘Z<ZP)JxՔSLDP.lsFa]__|T@B̉ƯaFD0`M`pq08JihfK(dRP }MRm&kD]E1K]U/[C ~ϗO=X6ܐ]gMzq.wT|a. %`E-iPǣ)C#Jxq5\,'C!ʈ Fl|5ʍt%.jcEPFAͧn~la;{z@?nז&(PnRg9Ufߒ<|Qt.0Hipqnp*5N"f+_QJi#Ɯ  CŲ0*KPC_q>^ F x|T6mTAhp#gjÅ4 d5٠Q(?+R+! 0;ۉ1ѻfxdbaeqYʖƛw?]9<0! FIYnD;FU}%VLdkP(V58EVcۉ5^bĘ8>^ [DZa{L1BwtMX97#mF Όm(;ɝerj0Vϫ+o"o:"gU;5& dA#ba~/op=k\Rux%VI4\qܱ<67<̫)_|FWDC%A_Wa|e.B=69A6_rY&q\~^Kv;3_PaM9c3x~x}@-XEfp3蛆M;C Ԟt;5fcɠ16N߶猇ܩ˦&j៳nvT*-eȳP'ӜI=WD|P>vj̀v({3p }zP̀警x<4j!lHB-i灲WiFzDB۟qEntb:[]J/)KWu}hKûoCĤunJyn&5vЩo \{TJu|&5TǾ`ou#xohϾC Q9<&ڻQag$>Ԩil΁B #}:7Z$ JltAz@r iHZ5MFYod̪C=†1j3`ypЪ0e&QG ,30zPvu5{~䘠р_gۢ^iH*T4GZ~kL|[dI=Ğ2  X7cQc08R)8 qT?uMJ1d8Uc,<2L1ΉWSD.j]]xjȽ`m ݌lw*a|hL,8I#Eh 2Fz j61ys@OSuD?g,;}AN'?0v'K>Cͳ` Զl(^RM:5$Ix~Rߘ%*p|wŹ\&1Ϯi Y-s~yWo<즩oUaeg(~O#:jW~ko>oOʭ6&7#۪iɏogmI܏ߞxqrOƸ,.k{ޖ۾>F=\!],QՐw2{lY:~Q@TKSyO" p8"$kElF/׾VLjf*$J[z&W>>6S_;DM}hhzaԜ==n56* ֚8œ{ޡ|ҝ_0!jc%A`h\ (}6KL?YmLj &z_E_#V jXf^Q?ԧ^/,-rnT@KlFұؚMiJ 5fiGF|`B#BB̞0Fn&Rb*mPEk#k"f 8z'))߽/o'I>7ଲyqNjzeg쐥`\&3H^U ځvlB*l-qB ;PbɥV?9 x>iWƒhEjçOwk ۖՀtH@U jOe#nlfF(%T ٚ-ZZr6[qhE)8qˀhvuxe)Cɭyn~X?O98ńAHg\ȴSN].{[kIcQQLJfޚz˒A؛9+U(%AK `?<{UPTAc눔)gD? 
=+1X&)p8[(Jz/&g3Wv3䟡Eȑ$[xcW6G4e7yk-jd=1Glq9S"|4P3C4 Wh*?C4%IW:C4=y|63֙ gwz˫/#ϗRؗ4RXv5H)(<ːgZưII=DU.\cؘC:g@ M@4'B5 {(6ܾw{1fzD4=RkL"(V#h1-4sXUFΫ7Ugi}v7lG׆3] XqFXQcFZnI#gB|l/_I'5++Ww<ԤvOs_vy:^UtRw[M5TV%] kȺZ=[3qxz~nWUqR]\Ow!vS/ٿ`Ęg4?EDY|ֻ]\UMO!,;˖NǟNneIW~h:Qur\B'`z#_)`*cؙ@u,~\K{qt e?~'dp5{]?{,xkM)Ȣѳ8(Zxr7ɅnJow#Ypg9娽l^/OΫ(4 6} $EP &Pҋ){NH7dޑ[;'gaQ]@Hf&Ā-hiu3cܮ; ŃVrd;lkEɅm=2B߹՜V;{Ah+Va1)r  DZL˚:Y[8(eM8UAJ8TZ"ND729A&|^ɀb0[yb)Иd$5c?AJ&s7/& ycIXİA#)>K4{A2,""w@^Jݐ)}ф>|x OOUn_eXamyeA,VT[/u6N9o-=gl P ֫0+: y;1Rz;4%S9.MӁHIs+ m0v={~aN %SsuהDq{!) AZNNK* KIRvA<̓@> b:O\%`k+-)u9q1|'`z~. KiBFi@n=dx 9Ôlx jPkBE ^MbtD K+4$ EE/åÆL2Cցu!ZqB\a>MF upVuEDich`z{isd`ON.*!)Ⱥxz˚ ̩.1CƜ,QN x[d1 n/Ɋ3mwNIe~jͦTc^1N&V]%yiBD`ؗJR+x,:Irpxh/ߟz?{ ]Mu\Ek!ݯ g35_r׳C/_nO??@\_';IOHK%HBJnӓKq4W\g6ヲ~nr)nO`I0%fg 1ygw:E8Oý~CCbLʄa_&갌0ȞpaLOK{e AIPyJ,xA@+Wm@RS6.o/ś˷v3[)Y^g l1CX@R[uC& `(GDv:浇JrP_ɧct z.\WJIad*,Q'zˮN=HS&q7ZIJԨJ$e6Y FCjtNeM:V|u* OdMU ?zj6e­ rc;+F`ZP8g>BHdt4(6T:@q۷WFk\˙PF \F"e |A"/^ )M!Gx5BT'Jfm4VB=J0U5U<$PddeSvXJ1VL4$# &; i/)l4IL0\Z'KX+6]ve9*Ձ1nkrʮ{|7S-d:0-sцz`L"_oSd7yN&"fCtIȖDDΕz3HTh<<άM+oau~xx\ref'>MC7z7~݃#_,/.B?Y?}C1~xnH[ 6?(c=s$<ǫG)~>ƶfv\[|sCu6G[ŧ1kz(Wyigu{fG^g-)~vfmZ3R(|]l(Hl\~ؑiА\EԌr<蘐ۍ9O"ptgΤ1'A>Dp$rfB$%T<1sްxd%sK.&=P/߿/&ɍU?]_~؞y9E9)^Xvu?yA6pc}=:}5ګӦo&vE"R$e9w%b׳v,E yuџ="QوH(-_g!bW, *|aHZ*٦}0&}~{[ \) !mZH|n"1jI" Q/?eԫ7ǥ\_U v}^/tM>$?wx$SoLOL>RR*|!|dYb)2!xeIa&Zʜu"!-vMQIVGLnT9R&XJ8(#&OA;ɬf(&`Y tdׁT'[1m "&ɬȹ`Ύ-IHR<;@S-5uxr+'l_2a[{k9|2@5maDNv:W6Nf"P{Tr%ZVfjL̮FY?F3jsQD cXl*Ak1%Pz57 }ZU|2;5mPxˤ?&<鼬& DU]"d


Rc([cVzQ ~v Q$Ü1%4њ%3Q։)HNBH R!%EHIIFP!-TiHb}1 d` )w_') "9iRR=oӤ fc2U 5CWЉB 4drlN'AcDWH`$FR8rd8WHeZQsiɁ'/3 $"%ǿm A\: p^lmi֕T2E !1k++ZR;c'[_IZO'V *,ۗvKZ~9MQFs9>0ƫۂFxSPIFxR91Q֫ŝKʆzSNtݫL~68yu&r3/KOȈfDJa3Tx+SIs@]R kk",Aih0FL ʊ@Ln,Rq\*-0>'ZX>sjeyR)Qz"8Gllu$YԜVgv bl"ӅWPfOiA8 UTP N}#"⅕50kԏN,|&۪XJsZ$̸:R#JNXg z@{,GT 6lU۪&2, SOE@žu!zʋyH,Y>Ɂódѣ1ug{RÃT6o|s>=e3cR}]J!{|88E5 QNhXvh jF~q,yIR(&tL 8C<s$6ĭʀդ]|_289/|j묣‱O:Ԝ.9@Jc3UQo_?tzUeEF(bgBJKl^"eV!=%KkHnM7)&[MgL>] R!Ԡr wOb" 6$ $³c$Rez)ȵ =p'`l2~L s*mP$Ǒa:HZ`9FאEZaҙt_fC?ˇY^}A"eD1H DcjIe*+Q+lUQhv[ DL8C_mQtv~4WSlL1fQf0X͕;6"7TrT-Uhs)0Kp6K)( \Ҷ4vq3p6*6WSܖPa_@L6xY[E1w5V`:n;n2p*Z+Vm|M4i>47gη|ZR0I<-N0 Ov %iʾ+1cĹCNS^{hM(Z {sG! !‘k,%pAS]l ~ 2Ca j, |i%T .6Ps( RX縠vĩ~ݸ7T_1,X q=\a rT%qAqh QMf@rkm&h-/%K[Q]3`Z\KK-jJke:tTC4(n䷓HM'/D.߃_,,Q97gv%Hx)V ʕ,V]6APv~{s8-e!J? KSrMm 5DdTTc՝";,_~ziCE tnRvKj9SmXrc^-ג).z1n!az g<~3.>:R٥%a!@k3Psv=z}5QyJXm%i[Vj*'k@B4yuqJWj.^b2v1JM xʍ]ROknbQe_IѾZ]zF ~j!d}v[敍p<+. KethEYy ?\M g> :d5vz46 Z_,?6W1?}->[~i+[߸Y-gf,6i``/p/h W%2-e;jsK=kW\܎D!|?$v2(\IsOʭtX$raˬ\gi<<53*FJ^NYC1d (D1z'f O6*'ocfSIe"fDIXwwjt>߹Mzid@|btzl$clycS4D)tJf \M`Y V:t2gZ] Cֺ{g&ϳn4|Iʼn&rگ?sax tӧtG|EtI\=ӍQ#!+Ylqxˤbi@Fe,b,d*K=V2?B:<99br)3ת H;^gהA 7}@I[\L:Sc,iL*-+mDyD=t>:9tG37L;.V OZ/, 'k_h\ xb-`{m-JpQ+q1 /&NH|?ӑ ٞ>.:LZ|CZ@j#%B-ֱO}t(@LT8Ee-+RL-2}D"RS~8J1%>'NSl}XLJE/݇ GE8>%DwϚ"+YdTM (MohFA? 
-> j]hwdtn5؊ ^?S >|.ǷᛏU>?U3;|i˵*}}}HypBQ/ڄ~߆fB#u]H7euHsdyNlB  ,4K),)*b$oƓ\xuECr1@E*Dκh`bi>4 ا>7>HTMO?|ggK ``RҨD(O!LO fp@[+̎S-1omБ86Poqcўߤnt# W6DaasT4bpJ!aRr&΂q -C$PBPΥ;5vʎ-(%21E`W߮5p $ig`5MProzEK17<6?x.Kdxdl|De&Ym&y?b-ml=BbDSZd8 q;5o{rVN!<.'iã'A|-.O/ᷧa5}pppzy"VŠGFf󅅲`Yx>e@1i>_Ll)J62w8LP_DB"6r1L`ty(mr\eIL"PWZ%V?/DVco`5F9IaK|-d5FUB2 sLDdj- lT4V}8݃6' 뉺A8AAo  O?G5sЎ0vpv]ΓDv-7"vrp#:;H^-&Ki2.1L:~qZ \M0]]~8vQoP8Dķӽw$؟͌$sຫDӉ3n9ͧ#=kr@ۼS9ِi4%$g@;hvh=BdA02rCyr+H=}lfAu >&A!NFUI$@FHA"igs[5O#1C1jGF ` OS( Hs`9d#P+BB i$vԈFiOd,0&-;%a'(z$ABRa:jHf\jӝec5\K@Iw)0޸hD;'ڡ?'ڪq C*܌3)ř6+AR0 I(+lܛvȸ7Lү; -kn˥d꾮7 -o @X咜C/CͤRV6R=~AIRg $}RZkrSQ$sYFav-E!4sNsr xJg$g>#qcgi6+)D[e1igydB~oz5('7yNFs U;WkToyZ{' ׋`;سRu!Íei.a5V ޖEeB@ڀu"…џ-:D$ NmivpJs{\Mӛ0NaRpp}~>OfYc#7[™FwmI_尋#ݏ~,,d?.OYM~դdDRp3!ENwׯ^]]`k{qU(mw՗۬3z7wZi҈=YGw;M\);p :߄˛ 1F cHr:jle`֌,wr1XeDᝊb!9:5\W=)h3"Uµb{fcʽR~\/H e]%F|{!EzO|6LhDŽQڒdV9 ew{!48>K "b qF" ص6tvyzsG?xu{w>ayzKL7V3~ز@\1@f2`"yZ, iﺄtR Sdߵx|{=RfKr$@[MA!AfvH'}-ܿؼ5,=#uW޻-3|& Mlb+ <6z+udDBթ6jcTRm`霢Ocכ:0;E]'Z:?~̿~g]?FU"g|/Rxe zX򌹚_eMǿqNO,gev7͛MI0s9Lx `4"116;A% o>\\yw1wY גwzp:At ð23uC)>@EuM;@zk | ct1 ö@NJA?y=yQb1Oj5'itR-龀89m npk\@OI_ظyueDͩP->D[H &-5^OδPQJ*Acu1++~˱moh"Ei}ROJ3Ii>i6sqgU?KAX)dBFVk'X`ʙ.ZuP:x@9+K3»տ)aR\Iq&u4]&N$&0C'sA(d \BA`2ʈ3~=).oWLDgtVq>{lM3oylj5_x/޿{|%ܤ~~"K}~{SsRd>)[>=cO\ۻ^No?b{yIϗ .[ah }Q}KJ{,WĄǪh_F6T&HT#)N4хo8љPQ/l6tZԬyF;ף%CNO .m\ M#d`CuUfce DCK!Ƞs+$ ZkL@2(rB"1<?,%KtF$Ib)G$dC"ͽbI&|Hhb(HFR "g$ç?;='IL^yEA==;4kX`D{̀PdexS&k%2άcN2P%!^X}'I7ɸt}/e& `gE#g+B6Xk,~ƶ&F1Ye YfdFrc+"jKe&za~(h1%O]RMv0"F{uX!16+fSnp|#"qƂ#䇈6 Ȭ-r\s4m!+W He/PIcHT# =-7zq-N1 9f%%;P Dk-M״a@!5ĄKhUmy[mpģA.ӭ%.}) yɀ,HJ+nӻO7i,?]3.I>";\(f4WR& YI6 &2%N{&d@&Y/L[64/ 1WO~dZN~duL>0<0yU|.[QE.Fϣe&HFVg4PY8ɧ+ɧI+fG] !zA~ԂX-`LЅ<)Uđ*z;%@ M&a9 GZFK!prv ,'BΒ a %2v/dl<-m]D#/g7W1e]H`80ֻi1,'~td$N9֑`i+xMZ@ŨDֆc&9&uDR,L6Yi2Qtyl!WQp|̀-b]L0[{k d #ۘŠ\e~RKTk d :i|-DdK_*"2TJNp"&q~{_~Lom9nsVr-56_OY%&sE]V&~˒}o2|\>ȼIzj?нd`%CJm,+y~M])PBq:/Q@9ӵkYj/_p/}S[}b38~IԞ]Rb|oΟ'w3uяf+;_ϮpP;AaIX;c6LwKyKM iVxRGPTa#-6:6׌J|;E2UzvP/Y(;Sۊ1<`*>PRҬگ'1|A-;mKc$BgXU3S\'+a 
F[h#Rl'_zǴ&(EU:-.^Xc{*"% =Ց a;XUh̽^D$/,B F| F-XɅ"b@["͘kZ|±FFj_fp mx1^ώ,w j>9yTs$n΂JFq#ƴ+,8?H:$ٛ$cٽbRHMCb4Ad]_.?:iV)Cxpzu)P+!,^<3Px!O3h^m+>;mb͌g@BrV}9O"FX;cHEcG 4dy$" &;h`az5@76~eV!YIDAGu^![e?N(wI`OvNHF ag$+0bȵ\5av|"b @c%Q[r6yyZ0R(+S| _8nCH"*ܟS"ir,JdU61 A!"SIvh4}C$d@EE>{.Yi5G,&%kR&%kLj6A U&g^i9&g3&C2 [꫱CLB.X̢@Yt|cQJ[.̠#~p&™ gR,Iy֬+:L(S'\̴,[M!{x'<7k8Pf-E@QҥݚTv-*cB9h+R,(4![\aj^KHHbz4 ;nv&꿻0yȤ\0jN$ѭ7k{x*E,ʨ=sytZ,ZͣP–Q(שUEu3u;(h\BVLN'MZ>߭RLe )aZΕa9//d.?L/^Aҏ֍8AWHu~4+qP_6 Jm/xz%5;䗍1eGvTYݬ/'XhYLpM/ˇa~uYL1y D[tiCS:4hQώ6;g\ڑw^]5Kn>*'Xnƹ(4']J%PH}сA,=Y-:tʖ@`ƪX"(4,YJș&d [I/g-~1 ,67dd ٶ6Hs-dužir]X*8o NRRؠ15ec)4SZ}*!!HڽVȦaa5WZ(pv R_sdٚE 'B1ĵ5CpE…VaQhJVH'3pn 5w *MɋOiD*ym ρ;=^j (Q#ЖXFZr- * !jR@,w@`'  V*-ASX4KP}<7Z]rvx-;elaϣLtg;p עQZl.8=üY0fcN3a; nrxxrNeOY+8EWk4JbTl BX #8\FiZʹq1V`Y/&\@A1-jmRNS0x]6]a~27J_XJ+2Ʀ`3jQlNꥆX@+&8wp V,/bȮU}T&]f͕glpH@"([ A:mF4g-pDc H'‰bd2J MHLA vI> f6:ƒkS=>zzsHQ/y 0 K__B!O  ێ X=n/P-F°A"U{6})* ?@qFTHEJ z'4PJ.Nfa)F2 hRoГ&STj%{4Ukո2@H!KF !LCQ)IBPk̄f;J R 'e߈ 2P| rę|A1ѯDY熠bdY8%'Hp%'AFѺHQfc7-,FEWd\^Ƿr[+Z bSv]!62s<-%;mQ9|YPb3~Y ]tI>oa&&g3B>Ng{,iˣl{A-p%튃pGVoB˽T㲀I;خ}qAkbx?PB]ޣ R.*Bd(rԘsO /ndT_QJsGdR@Qpk v,J'܎V&W4[ fK[Z"4 JW&MB˦0rL2ai([+;lGژ#25=uIEmjej(4#:A.3Bƚs$2LbF06u,3 5Gxe76'0yvT+?p14*>?N*Cs(Γw?\= x bG,< ^rOd6~zpK؎V^jB*qm"'O2EsՋ|Ҹ3 %oG>y܍w#H t*{`Ҽf~bZZο q_ʜ̣8q9GO 2)f`4^h:AeHU}wӜأ^|{~M -F0xL9 lDxD;i 3I {kE$4w(JAnӽ><=" ːp껉w9.-(7y :oKm@M!xGYʖ }D @C{.D{~N32w0LJl~Nsx~\Rc2s%iWM48$Ԭ"A4V]λ9xطW}SV} =|O1WB("v .yb[y2W-| Z>|0 %IV$@2T-÷<8w뤫aTCחKuPRr)x]xp6k\\a)"˧v?UB4N-Ƶd'MVz|֢yt[|qlA\s4b oUtL L껃6zz]+sn$9s㨊|&NeC 5qv[+Di蔵V6TZTEꤞ:ɱH%:]rՆn34L\i ,2wAd.lr?O]b35v|?^|ɚx6>:N]ÐHyޒ)b[BJxykuV׸{O\PUx٬ԍ"IQI{IYEpvgeYqk,쩈ˈ <"0rMm7h["HgsH Yti.T|\H:5 BR0v _OsiRGgXXR?H4 b._BR=j@X"tN5Wy t"_~ž&/9D élǤc1ML,yʈs(M uFSJ΁jOm|+V㹝-jҙ}r%0K38_vO*V%~B'Tb'z?ҕ9??#{?\ >z?•9Vz>|h<¨3^Dž[8F߭j{ep?SWyp.Bu'VǷq v}@ƏE;.\pcpTUmJ!ɝV NH" #J U&Qbƚ♃7ACC5YKSQ`4q6<#E*iDSRcqp^&}-մi mNh\9%1ʼnA/p1M, #F5N!$IdjsB{ҼJrUQu0*ם4`"}˂k,WB!9E/=}cA9ۃleU6hg2Upuv"ͭ[@־"d-/ =6TtH>/s&}4AUj{BTσ}I95}7/?f5"ˍF .B8JBי\goc &# anAlM~nb8dD0$4ՆkA0$6,~ 
7%t >JQsXZ. F~ק?|U_y#Vb6DTiq0a|o9l~'³=|eߤR$2` ⌕\]Iަo,Yۊ 2hqsT\n+HnXw$'9$׻lY-a2# K6JbAUL*a&ɀݳJ({ſD$iٟ~6C.p%Ⱥ> \7v͓#}̷ +onݣqy\dgs~tO TKc+2í*BqSR'ֻoI.s<\f(y_^ɰQ77_~zKVdv=C&!r1\6/O!+Lɒ-O|wlfrF|T~~#B%"V3kX#c)G%$Iԩ)YbkGk3~hchRԇ K_mԱ68;A/BOܻd4t2\Ztj-+xlRTD©Հ%㢈\T'f);TEⳐcz8ܢXr+x+,П=bQ7p'h{&~8v<C/7^@o 'QJJwڲT(Jc&A 3ZX*R4Jib&|@D^} FA.ZJ 󿭚ۍPIOgEczD]F,wy܄xx&]wɸth\gazo@KEqfnuF& _|ĸc;KS*v8GmɥI{};Ij-gRw\zi1}Z\]|{h"_*Ιŝ``OOd<{6y_iҹܴII*$WPne-;I], Vk(ނf-kEF ~2JI,=RdO=#Ai,{a2M@**IFm5-4")[ւɠ91Œ9THy̴j zU 2R\l02b \tї7y;3֎E8Ǚ>gxv.,>M@ybc<SVG{_\X퉍0F<߶dKQPܥ~ N"j/'BuJ(њͯq?L@/$ ctg-uT ALTX8opac,5GD;f"%:1Ղ`dc$N1^IށVRF"-6% #`MW]DP Uޕ6#ٿ"`0+ 3XleFUHru.odJRE*/RWmS{С lx,Y%kny4χM;1|*uV@?6N&d Flzc^.w<9;lxRHkQm¢p3=zyl@-V)mN-ǴtG t!i!}Ϛw: N7h( sA{Ws:#&w>կmEnߥEx밯녈fYПf.zm(mޖ91 f[nJ8V_]?oy4)7gZեOťU|*sɔgK?lPӯ3=ՁVh9B{ʋ-eCfGDf?9 ._\h[sNcUv>ΞjTu,ơ+~ ھ?Xc20*7գT 9tڡ\$c+w~kܣ|҉,m:ZM*%'cMʎ⸈shwQkdF7/ +<ю89IlX^ V;` M'EJ)SÑLubfw n{Q d:=*m 6Ԃ V2+I_Ѓ Fw{!בٙeS҇ؐ]42.AT%g5st*7MބVm/] FVtĐbs ȌF(S0ClPy0S{tVCS3;C2q Jݻ5; >>?+TN/ #{qR.I}V<]DRGFgO ͬ:3wjśr#;/ bU.ɥ#& :_u-S߼MG'G|'v.: }kî[OItHĥ%U yxqyODE01>grӪ0Dl5bM,sQ2ٶզV.. TZ@8Ƃ.QP?OQdש9 L+vKEabv4Qk&n08ydcn$4Q6VNRa5H53sC-r}St ĭ'yR,D@ܚ$QDp' K-&AQKVYtg ݉q {W X %pCU߯+8*TjJ^݈"Gw Vz^]۟ngc -FdIkw)x М‹.h) >bv-fkɹ0 Y5GPs&sm% DrHN FF(Y~݂c@N $_$ [%&}[*H[LLŝ 1=^p5Do/uQWYPAWyV|/_&p[ -s8=/9ˋHŐ$8$6FT6뜱M:gN)s1i/kg:gҚwCsӝیr0*ey\ĞPE[ d|뒻V2yPXdt/S8x9Ď+j.!VcWUL>&Z{귈NwJZE6ix(sQvMGyBXQo#!]թ4:س<l'n5(\jk2?o}+gRt?|lݴy0[%$fW5f. :l.waB4_-Eo9 ŭKZ8 f{Ng/hSE?;C\'̾IՖR1[vKK%Bmc\Z ɋ!ZC8L8)<9V] >FMl&rbUtXѦGmoVs]']euY q>Vi7' HpD@qPq"Hԋθ!`PθoT7<a8qyBp):WEQ=Džt> #:F/C]:łܨ {XիD- ҴexD=Q 1A8VA8X˶T=cV˨"I1x~8H?Ohz=W_N1w_yF* ,4屎$Nb2Bs9cq cdf"xSHSښչU:WQzS pAOk:dyB5r\݈=IHZ*\-[Un4䑬^W9Egn[qЌ8-UWVKlJikD3H+YC̰,Oe&ebYbe \i [t;O+B>=?ԙqn{k[q50&exZDښE? [ޢpkTAb+U`җNÃtùSk[kW꫷6Z*P9=ܩޢswɜiwL5eÐFtx嬱xswєƢ!!#5NqSAu"fsZӊliݲՀ"fܵAwC[;[)應6=a* F wD(Xٓ{mh\?U}pX%w3ֱ߳vL O//N݆"fթv^AۦfQqV(ul,lL7*`4ChJl BV>8lU% %e68]L hҙӶ8l:_OHoI g2hO,\"|Kki![w'8 |2p#mb 3h \# wuqpʡ;/67#)Сv"t6'z/";\y[Y1]inMSdv}|y#92&'ɑ V-܉2'rӥt?3qg;N,TkY-B4%B.o2x`bf!*tUt= Q e8! 
4xAe>uAoH`aF3ÛV7,&!cLL]Z,TZ)ӫZhnU?aPDyI`M+{_v|IЊ/F:8XdnU@6n%~2悬fwPs- Xm!Ѻ~ŗJ6(v0^Tb4"{qgys/-hwE˼+Z]e.K q(h ) QdRX%2L`x*bũy30R<.E}>|`ap;>Z> >d8qyZȻgۨ!i^s0U^xu0Nk|}p̩}d6$IhX: bNPh6D\(RQy7 E)lHviE ۋ2-t%HnA:0 #H W$52!ŖvY&%*/A% wZqZj]QN4m@ U0DFDJE`q21$V$"+y穉L.5[9r-;pc/Sb)-@....α7D/4dz؋3.oދ8*W.Ԓ7|p}];kɢʊVw8Iܩ}XH{PD٘Dye<;nfЈ1&o63A͔KH 9Y5<'_?ѹ a LPP'(Ec bW[^/vS,y$hxuPڬA ցlr*0^l蜭ի`(*ׅ/蚻;a OM,=s7-f4ɇQ`uP}nsum=}?2iժvFN|~@KǂЉ_*'tUpDL;o뵊dR݀k`!M{G&nk: [.u]+I<(3=$|;fa@AR1V=G6hX0mA%4Q6VNRa'R ul1K{ .BR]Z*_.0b3%"'l@#dߒ}7Ό20oXxԕp_ R^v 0{,~WrtSƽfg2A2Wqi!Y0V71B,Y__}&*h_ ra@vʹvwÍE#jv߃>jk|~2(φ%g820z۴yaLR)Hd2]~VgT_IJatuQjs2A?{sU?L'f雔5Usv:x6tvzʮ|mϷi>6s0%&YKi0Axז -Ŕz R9MU"ʮҏ8 _Z(@H9xaXb+TFmDӋE$TͩD$n %D⫽xM0AYـe")J.9 !kĽRC=1jrB"`0 Xs+E`8FtTiGXSsLN&Ʃ3QTNB0m(2?i[@reUkW8EE OLF^~rhZd̹~/*Ec}5/1b}$rԺc >68qV/ N.L1&xʾ}y8o?VIjy)F/>SŬRUn.`phɳ L!*I.~X͛NJ_h qqv(}26|>`8[NjT FųVH_QaD+* ڻ/׸GJfAFNW_fip7^&10Au\ˠA^ ţOfW;م`}WWء-jQ<~Rf+fK a nԕhr)<‰/1p 8ָ+/͙k}K47!7,x&0 bjJ0* {[@7xގ uQzCXv#&8tɏ: mD5.(ŨBitpig+ӣc-BĬ}IX Arޞ~/X^7{F룾نOఞ!YJ!ΉSKմڢ=3f | R PAd*aeҞU;?_CyFYXiԲc%i4Z3hR^v|UCNY2!࣌)D:7T n*kڥz}U_a1Fe)캞:.ߧT~.G"迿ٍ?c\B0&in7V\r53r"ZART1&KNc!,HZ(@B4H5% ܪH(d<))× 1!-=v.+*+FXDO(E ];2F;}8 =d}6 ID}8+ޣ !Jk*e {YFYp} R]mΊm/\8՛']@`Y.BӸPwx?}vo|Jw#DȨk!sZcU*JcKrP+w1[>Mb(εFQ0&3e_"UVs:tXӒεDζ+9uJ x[َ`;'W XE$($'7HG12+3RA<VZ@bpJ׈Q{S<GX{b=tyu߾~O9};7 5rzQ?"缭rYpnBj[h4DXBfV4IA]u`8|vʳ^6G_^e𨧕P㷺\M]v+r/Y|w \!LK7Ƚ3:Ľ "]GRPEO}K}1 ZAQx`ۍV`Jn*+~)rkW3ݱ{!e!ck$ '3o!6|_N)D8TrF   %@eXǕ"(d $7t)K}Y8>jKDiߢoz>APJxA'9(˖LQE 8@4D*ObV2ޜm5s01.gэBH\Ee5=RKsRtUT}s!=_W߸h%Q_TY]N# +Qrdrf-#;rлNRQs]EjHs`L@fe*JBW@gv%/~=S3v@ˌhdD',{]YSIѿLchܤuA?{sU?L'f4֛=8r([HvjTXbЯMA8Ct?ħ_HD[$**״^x0I]Ouh9=soW29jf'O/=5O> F*TqPeAj*\[ | [c /AAY ^pByg97Y m7.~q;&c 3N2I,A)4cB X'Ap,bklӠS66η)*'~Ř u0!$$9r}"q:>y"5uIhG2{+wbrKa_7fqm[_[mp|vڸw{j}ZE)wZZlpbj{ _W}HwՋp)L *ꦀDd䘤ްh;;mn颫J-K C?d{3]RI~/]`{֩ qF@F[Y:NS.{1A2>C(+BS%t80kKWҺȅR,2G_g5)ZUwd/>/ݵEzsWHx6Xced!?O{m n^p^|C+Ι8uD0*P*8LR5Tr`StkkE?ގ?>Y\nUOގ^_ٍ( 9h|hW49&GBu&g69səm:sA8B 3(hDa5U/ GJ!;+)#=?J?T4?J?~Hf6K+$l.9Teu2S}\ Ⲵ 3)ʐͱV7OTNh祃hdu@`BPg1?JTJ\ 
1RQb|d%V0"h-vq~0K !J K1cA2Eh%n:<4-U5@VK=uTӟ*Acx{yW-HPAN68-F$h 2>^5b  p Op{{Kqs0ܗh(ZQ2>A !ǜ! N^y"]cx0D;p*_{Ghxn≠-1Oիg]z;ǁ^fb(Hf߰uM Cq = ^[&4Sns6]h!9~y~ @j^|غkKu}dp凨db/&|eT4 1GW48_aH3|7ڥ ըh%u'u)uq㥖/08Wq.ƫ9aڧTl/ LEln.Iuk0K|qV C/xe%Z'rYvRإ1eru3H$HAPH8{-݀r^FK,\w+>9J~sQBym1hws؍=u.;?ENcO~^K6w;KsA_^F9](4T]R_pښhG(q}v5aM!"Ha("19VeNVug*`qe+e%:2JTU0SE&" (uu$Ue(Lju\XnuaT"Ҡ64o,q=j#S,1Gh"\s&Z4?wqkuĶK,%b_E)k92Ս2@[(/ vzʷQ0e:o"L~4/_70vn$;߮r:DkC.U󷖣©55?w;. x҂; FD)j')}t O!RX p)G")HJQJFܶYL<2! 5BҜ= X- BO3͋J2'۸f= 1%|R&R:\`D,KQ3͂%iMewlj&E.{ /årN103Cn:͎tFdqLĝb{W7o_=ktryD]aFM\c@[0`|EZXY\woZOi4}m`d~')Ifŷ;Ae2"pS0 [#=.8X$VGTKByu[f`Y`D֞3,Gxф *ñ z3)>Ḁǭ~DwIkE\ TiRY ~˜졫}?6~L^zScFu/X @=@$e(\W y6 /T_+U./ǥ%Fvej)@2ɒ^!nGxPeaEKL,bj pv5z;ܱÏ^LË,sN#G~]w1La6c$?FO՗^}V?̇j.|~v=߿ۓoɟyݿon޾}YTYw˟\]/?ߙoW7{ޮ썊V!(F8*gtܢvo37{9 үѤmQw3?^L',on>Oݔ?HFN>:Q$ b)5u=w'n=JgG~6 wap4o6='5J{lN\ߪ? ZjkҸ0jp+WCYd:Fw W㑦ORV+^ymꛙ *wod 'oӺ O0fQ?p/~鮪P^(HuʅoWv~0 ,! Og//?ݼGz|<{ׯכOa"|)yOm4|3Mc71~25Y,__Z,5d0jʹ(ѻ?7;LBTXgXxb2M׾N5UF!oL^i)H}WS b v<-4@yxJFoBJG(R}@wR!7F˺-LK=~s˪[Ӽ-˅O%6NomI܅9. QL=gS0wՊZsHw#GɆin^& =\@\έ˜=LE9 77/G xŃ 5y>토<'hhk=$Տ~툂B6Mu{!s}(L-ɿ8*_PX #"RMt>(&m%mk3f{(>Q;7k3FHM#[f0UB2p ɁMAuǏfsPI=W1P6tLA((j_֓BOs iNLт lYu DR9po@0>z$cTBoBI 6Ď!bvl]};*VbJes[?D,"$GMk'!+>SJ!3fhMsl ġkиa|{J-*)[i1))4c0!PZkOh)GB МsHDެFDׯda MQڂ-TeJSyzjo3蛻I¾>ا.=ЗQ6BJ . OO}PSe7KSSS6s12P䬦\[ʙS FPPtp8W1,@IF 9mLH R QRnJz,;PҜ)h8N,$!&Ll$uC^$v'ShE?mD?d ;0#Œk\B0'1djAUنsq=Zng~/B #LKYh|(ꨯjKP\cJ/Ze{2r)I˜TzX 'qq^u7IE3NKa0wl© g`6vlC*3 m,,C[km)_mPzy J2YiM-HFI v(9ݥ7[Ii^h#m-p>,8f;ck%R*{r0{1r0Uce 9]?1cpڏMu^ t:qk?^N-_7%-Í`fݛ@@MPfub11FQ"EuF4>o-hhU>-Ezh^b 8mF,3nm|6fI!R@\lXep a49f`jل &x4{S96xG$8KI1y:Qf2Tk6G0ە! 
[binary data: gzip-compressed contents of var/home/core/zuul-output/logs/kubelet.log.gz from a tar archive — not recoverable as text]
Mu1ƌ=l3ea"QwS0N~]mo:+잻G_ w➽-z޾dH"x^iOweǑm9LI@29|f83·wγo/_y#sGlFVMo| ͱkϱJ;(R`N*]xz<ۧZk(30e$UT#e0%XFaT Kb*R2*P}Eur_SP̳v< +7KB @>݁0i8y%MlbH%LQByI#c$a2 ֈFPa6Lӯg%pJ)űV 5,l%C+k-"#&Ia?A2bloӧ; nفHU p(3[ALqecb2L &=y}(SӪ[:S\|p/=K;\?>BG4'䣝AÊGљS$h#OBPuMny:=}aH[_XE3PΓKӾxr #fbX##5Nhǻ:kݲDexY>߫K^<ǩ3PӯKsI~k.B! R݁|.nS*3qB࿾OyJ^6Ɖ|.QЫ5XC+z4oΟlix`[KGǂ |APp A.) \XݽkŻaE4OlBӄ-x&8M؂&]풶7 Z;TahQO [ԘRTK1Y)풀zBю>6,}Ll^}M<`%m{W/.ݻDJ[*pVlSb͙<dЦ~g Hji8ko{-]Ruf9sKa\p=R$=Y7'pZJmO!##W=x0J3\\99}8O 8L9w"ʿΒʅv; -iZO{nĴts+yG|,DZ*O?S*HkޜX&o]c.TՋxrINb*è{nboq@{kH~|oY!HwO.d.Yw_G;]z> Gw}oCk|OqKr1n|'h[O][.@:ɨ[gc1i;Z'4BEwXNs\R/GLtjmg֓-ڞue,,}u.R6#%cSq{۾ۏmf .w2v xMN4/~u .BҊQ$YsW?kQpt zN&1aOݭ"q0"8*B1|%i-n~Ɩ{zFSXo5-C}u0w_1|ƅ/ݸtS _*FMx"fbb Tc3h%U6HIFD'VrH"XYx-QPi% v슞?DJM_TSl@/֯D& [7VJ=Zro;oW@Ӽ `n l@CSR)NRiFΈ4V +,3c,KbK J|(P]LӅX*ă"֍.{u-U8:i$gFTĊeDLa'¢I&HjP[ aF+Ә"kNXg-שՒغ b$$HqIScTrur8LIBڪK.bG午򲋅.Rݲ0,(0..XqQ9@H3 +(M`,jexXj$߁]mѧ M:y)F+s?֥wּ)Me}^bO)9N~ᗟ,Xd>O}챔0ˣ39/I-0KXQp2w`$bN!8&h@%Wo~:c xO@9+vp}#\GgN c ^Mk&r>l!Tsjze vxB*Pj Al09ٗ4v-nC=Oyw~IF]]=gQ9zS;^tq]oN)Ë!ϧ> ~p?ȼŇҧ-NC:Mgr/fmnO&d= i _ EhyWXTa9)j/hJlɯ#M+x6۟^ƀ?Pkg|i?sŊg֊֌yE,"u>tvMHxp4uq/Z׮N@-ф IPK D7.2G6CB1}HS{G|rVJNSND2͉4FLIj҈dI'.$ LE7sUBN$$ߧWFPv;}VvzSX֝ bE2PF`"LDcUSxP!e%*1`R*Y Qxׅ*Åc[ٚaϱ pCҍf`10K2h2NҒdXDh,"%*WThp-zxNg90CE8xp2|\:7_qz븴Q"#4~;gn9N?2g&2gYߣYojWV^6Od/ynx :] ,$fҰRXJ%4e,Re&`lo)v䜍"TL0&OiA<.EbXڴT?*gm7zeVj@s; >_/x 5'[.O G!5?߫Ƚ4Ɏ>:N*k.]:Yne:?ڲHûmQ\ Uc!S RYZ.VH X=v֙YNs]R7G9qɛ[װdQQ2@Qk/[ݪWThEpV-&a(B7E[m ʩ,ģYj5*ȐHP1S0?LcqB< Ch^aoUg+!*ƍJ}'N_a\+NLh-CxiJ?Ne9Ö:Mbh(Ii”DB%co3T~e!a54ZYT1po}\SXhն HrWnp+(GZD Ii)bW%x 5wD1hc兪PIF^AMMR%c7(~e6R0ADv%6:M"QiO%N}:^-#z1ًħJ]MŐ'ゝ3l9?6?m(,,#dΛgbj>'^2IJ.ySIfWEV(̻%Xw"k1J7z偖3_$@lqrԓ笈qɗ٩Od\ o_bTW},j7օG?]el! /?Lgpw"[ѕ`ӷ{BȁRQXDw揫3 Kk=Db ~ohm#!*n0*ݍ _(ލ>[(؉h7u'8l[an>&j[V  . 
Et^%<.@;bJz]xUM~fs0cK1fqr%o%$_>O|F]wg W{k`O6CnR րGwmX4q⤍'-0eY%'>A%e%k%$ b3.a`cY#$XS35#hH C&QbI¼K1%y7 MM 7@P˶/wD)MF$&MbNJ ̇>> S5na;'IMFK8~7~ MAz40~uCSl=~uN9|{ ?OsQf8f~tŦmUWDwy[5U?=cߎ;7pl7/~|w/.󒻧}8/>봓6~??F?~⏫o~6sE8;gG}}^{o̓Cm^{~myksd7ط?tsV aܟﲦk!9x`kє w1 R2#qq M1?k٧[SFһ67mqRsDz)=GV68ioӞ>OkW<dO?Yz0͐En^yU}PO1z.gd<iRwf: ߇${i s:)gba aѭ5:'ѽ rN"W&_G+gRVvS8s"{#rMX8L9ll--,5ۧ%a(f$YjpK Ys͘&Q HKKcbC@*Hj긱\k`-&zE +NqŴR`Dibe#>\BZD'kEA5\+.VDHєH[XHs)#;nxu!V!VD3@`OKRDԁ$J6zB{)@ 3VX}=tƤŤT3|xg3p(alxt0&vw6(lC&IAs$0K0PF #acc(qjrwrj(XAy@1 #ExaFW( d`9bV x !tXa| p(cZN'oPI%Ea !*$tePLZ¸gxtg b)OZ+ŲEG|f@wLҽN-sL 1P ÿD/}I*\c!/'y::ؚONFR ^'Z\ l/)+:p,wͤ4P7Zl', _i'\_, ȏ;A$7*2}6d6w0'/8*Ѣ!]ENa.{q72lܠs1z|@ÆHi-%gBV"gxjs 3߰;i**B q7C'Χ\r\Ϧ@z9ܫkw]𹬯㽼p`jɲc(J@;ePf^$9[ÌRo Xx&cv9`̼*,rzC!Pf\+wq=:9S\,EN/)ju(fn ߻Q킢M=])N>6YYܬAZ]qr"7$aoM'k1S(^Pቼ:ڈJ4V Z_qеUw܂5抑 Ok5AKոB 3~6ŠR"C"є&s2nF\Fͼg(qF~8;"!-瑎M V"!,JL"t37Q3Jk3h,Xdn(#Py?]za, qT,͵A E(7'n*>oqadBvU {.76sޱ3lb y<ءsx]мvqdzЮT3](t?+Yd;vp^|f7i4aV#1ua|H>\A.i8 W\rVY%İdR8s;‚_Hmؐ1tHf~T61L=*&5C)ڶg4g|+@/4VtP! :~UC4ޔꭋ7|<$.]"%t}[!L?y٬Qޚ)r\ hδU vׅϹfIAJvuF8S|Cei% ܶ WTmM">Ԛ'S2"۶ԺY}/.G:{^>0dT>@Uԇ)pUm ? Z3$i5%m>z55Qg11R*ok726iozC,㥽 ϯъm5ךat<ݠMxyۉtƇW/ޟ5#xqN_vgS@BY )ePAZIթdRRy=8+w9d8-K78m܍XϯN۞}/ yoHDZ9fb[KZfIdЊ XLLXPk,.9!u",%"Hb]LgH,c^-Q2p^vF-t=.\PqhNFJ0c HLIZIQJ{V/$$p}GkqHEFr/!gf68Γc!i il[Mi$J"MR%,~U]]J&j@&'Ouu/~Z|=^g3#7N>M&p!6t& ͯ{W!v& 77XpSAuou KCcLR*̭+Xo)eFQQ$|p<#P#UO1sJ%XV, SFk1TGTc1-C1:<-f("Ԩjgܳ8oɗz'W̸?{f5&Y6Xݜ E81*d!0{nIRH FB0$ |GB),u.ҌQeCă$C?p^voc DmT?T2FJlT-S̜10 LaMLzX`iV7{*!PtsC1sp\${? 
ؒB%Gz: rkcC&L9BQBrV$!ZRVx nؙ =B)SNM^+YMi1 Ƌώg>g`xSz>.ѦvZƄPWT@!qFCEݕW2GŐl_OFC( T"& dIHh M % Gs)]ͭ"Y0>%{BL%TA`nRNsT2'$ҒzC`< KS8޼ mbX [K/exvhɢ2^D&]??_~Y@6?߃\ļ WIk^[wUiR3;  mE]jv.k\<UC'j^ ; kr4Gh>`ih%阚mx:&M &v~Mi|Fџ>l;4tI_ 桭,-:XuփV\Yk~|25!ЖxE?yi$DD +u]nڎk<]N#G@KS  [SBymL>=$BvXv|ygIbV(<Ӑ31{xMloxN"-M v'vS0oVSBZM 8YdVjdg̛E(JNg9pQW8(u W t[s^_\ZWG6 ΀x |qdgF6u/ [ X"pJwE&^q0[]J({r4kG{k#f `{=xs N1~.6@hJ3t&Po5fUwV9V5M:{Z4VJmGQUCNt=i"rJϝ[N5E(+>S4ml//DX}|1lkRLrt8Y6ηg!U{z RF_8/85*r܉E>(.maQPˢ.DQB^Qj81XVxs5H%+V롼ir'&r1Ǫ$j (Z' o GNCʍk7 hH2ݬ[ku`DYWPL&T:}OÓFR[Nm4TKlRټj3ØA&(f7#?әIgw쉓yOxɥM>8czoO7glWi2>Jx'R!|<3"n7 b *^2rzd/w''+$jo\Dd*7ݵ:yO)Y+6*ڭ}VnydKV|"zLC{HBB%sfm J;8wE@&Rt̡ʇviJdcjŘVx<Ӑ ҡ#R]p$؂߽U!kMIae+x+wS;N|5@"ZN`UmwZcѷBMUUQ.ř  s\e関ueXLF]}81Қܶ9F*B!%4}өJ2*q_oy{[T*yI E77 xG$'z|{&r)hxdkin}ѻaw:Ԅ;033cu<;]}.Y|t?y6Jx SsyyrcCnilh !ya5(W5nK8qBa҆^txΞ|eÖ[CJV+ݶLոD4DR,lk:+\iT?z&P7ZK]z,?%_mKI YrTim)QdN5} Xe|qH'{o0x89h+85t P7(/jASV(y3ڨ8zq' m-٥{ψ!GO?W=i8<#0[ADmUV<#p![lr4[P5d9 L*\ẍ́ zK,\R_" AsO_k//!d_@S Kmo%W9I%> K Qo%@W8>Ed&r%;`5c{DXۋ~qڀjq\_}em_aX1tqw.. ϰwr¢{z8 cXxq-9}^r [̩Xc+"+TָT@`#ŋW9 ׃%b;S여hK x6/E(B\ԮB.G`XݛK5p$ϲDxB4C^r|~qx<~&A<$j4PArKioY].CN? 
$۲øN.yqJtdy1BrߍpOJ;nXe}o7>vƵ60I)ƹyt j$7WI˫b1Obk߶Uvͱ!Sڵywe\#iQJ:\8 h\#y[S@3𸘪n׷W{AzF A4a :5šNul&h}H-4!I6!K3r}3V)?.?Ϸ?~z3&9,oᵾZ1a*q$CNX0߄ H0kd,MR%_FkPEW n54OdR`_ְ_ncW_v32W+3QU~` ɽ9,8IE t Xʁt< A$Q_!<ݱפUҥN@ꐐ1q,QIJ(x"i*P5(Y#9PC%FV8on8=0p!Q7)gtf68쯷`5Q[$'ۛ*~t* fTcO+: "@ߔ}nn==>=>ɜuzٿ[cəl,FO;{xF`ςh,.\{¹!!yȴ2|c8?8c->~Y2c#e>cIy(2/Ovk/9 <9oٷvs~Q[V|"$Sv>]n؈3h* Os|k7LKV|"$Suޟ{Ƒ 9~1[ q}8nKXH*N?C3d"R&`ؖ8.ҍA=tgpQ[ӅFn-pW΢Qe2^Z7N+Ѹu Oie8>M֭ L44V&S"Nk[Qcm- h+n@ -#P(zT7> HرRe$`&,(aQ xPbK1/ۏ}+ʀx] &6^0Eb"QgHz P"WJSI%O.}(UMNN.J:A%)?5Jܶhr:b)s@៨QRb$ô?(~I_xRZ-|ϳ[q~ 'Az: [&gEq a -A_x6ǒg{܊l=m<ݸJc7m[j-J`B=+/7BKFE$J3.W ꕶTH;sT) s$f\׺"&ܶfq)`"46gll V3&f.( m,RD"9i4e3䝑{O1am$ MUJ YLr)v/> m,EkᗇW VaWBIejF'QڃPF ʐ"1Έvy4EDvbh+%%QLSG] {/fJ_oF3kgKT򾚩tbX3X*tzڵ@B9qѽw}z :kq!X]LVH9qV"WTu>+aBDM=" &ZA^题)u%޳pH| Kq,YQHz~LL٘K9!Jo T򰅍KK2J4V~L ᤫb:r1cֲQER8b܆,qiV8LJvr# 12L0rW Nr€5k$0pdLPŽtIULJZLi$tԷ1` "$@I"{+djC) d5 ~G4)0 65qbȐHS%/9FX4ڀ0^epcBNV 2i*6%wvmXpiyյ,.KZ )tyNW.8+JPHh$ \/cKppQDHݝ0IЧ*?` RFv8%Ôe/P%5*d,SIyipoJi1|i]={:F_T]vvT*.t )F (zT JTyC\:%" Ǟɸw[X=<]ʰW9Ƹ XaHcڅ(aΈ3)1TY = r֐NR `.p9|8s7V63W\`_k#B",g%2bhaNAg2+LY Dyjjs]L,2( YW`\|;~n8̷j8|쨾"[^TnK[7c2~?`Ve,}u$_3y³>G@]HB& dk}q[{wc6O)V1Υ\s0-; sx'{ 8Y2íf3[0$({c(I)qaS kANB4H l7'1_I 8H׎~_r; @ ;m1e5ƭeP0cR;] E!ΙBVpmx}8%'LSS}WR[d8ny!l6v5P^uHrP6׹v%kGZ6xdHF λZ{1u`@ v<Ćp!Xjwؐ+akƩ;{N@ >4 %Ugt"<ҷQ iWJwɐC;KK2ٌ3UݔZl{2O;Eu5[Vm~*hb-yuiX7anV~t5F, ?v^vEŽvIQ u~ZW@ܸkkVYǪ=Ujz͘*vU4v=EzO]qg[(Еu^ '!$kӻD vrj=\W")"~]௹{7O?h3IJQ*ĦNT S6 ̰m&c<&d3$Hǰtw U铋>jۯ? EцR g4ƔT,IZz=Mg$wRʂ;H5Cǡg6f;O~ U$o`E\2j?WU r+fw֭~LR7;tWϏz:u`".2kX" SLb!e&. 
nX(SV>'+׫"4Z7j7nӿp5`a VԋiO"{(FH,bI*H48:1ohOPgjR1'І +8,Tkoj0RB·TpRG BEu{zpκLSQk ӿ )0=a$`4ȍ ̚W>W_괲3)V Ķ{[B ZaD+%BU}b.Q<@hYk!akd\ǰ⮵iݷI#\0ⲐL^$#vrHC  J$O-5YmL %w‚GGg$7;a,ŧ&K0Cy*ց.S& Tٖȍlm _æкMQ+cfFB$AnE<L]b>tKiEmIS81B' t|~U8 M=Ig;VinFx d~om~S~4KTaǑM: ?/,s2y9)/E> XTtx>y?/~paxtPc4o_QW@`nZV *Y-' "6 :h_Oui`ć纡X%o֙TxMo A e(S:mڛl;͗5ȚvM`&Cc>&G"jUl{.o?fnb ) HG P}nG<{ )R:Xr37/V:]PD,t|/`{$J0{t;^A@:AP,{ qOV@չԾ&m [Wl6dADR(CAb#k/X$h= נ-8cXya^jɱٯ'j.ƹL[Q!pםnnǪsp; 'bculԋ9w c{aK='^5x#t)4'( D4]]r/#Zh9ଢ଼#M8= SSDZa9LOiTKf:J\;76Zؑ%xOvd+c7zv]'L;>i]pVn_7"Һ\X/ϯ/]./=Ox%{/3=gкE7>ڳWw'nX5c!h\|D;NDq~MMw -r>Uئ}|kPr.OU3*Q+/Xlsݭ.>qtSc8nvzڽnp 0jG4[ծ\o3t{ҍU '*`u~Wk_I#Ϙn'bVG!NZQx fMB=' r̽,ʹ_r6G1ݍ 0bem˩d 7ILxYޡ)LH,՗ܪ;Nz`fs(kͣGXiTO5:n`9#5顣F:r 48޶t@KWٗE!|ƔɊC7.-RruW,K1eF([JxnGzqNM?J`wx= 5܁h4^ĸ A He^ 4X8 nm?ʛ sq~W]=.W&)ͫ 'Qq}P;zpcwNJȔaSɑ9ť|ZJ*e8APT [ۊp8lIW'Y2Ϯ(8\x|1XF\QH$wiZ+2^,ph:agQ%1HbھEQL0}94gRbQƋ?-Ys!>7VI^kS/ujN zX yZPf3U/Qz7Hїm}g-=l =A=j{9F E' MlQ2ZZ~{Fxgl8]x!w f0>gkλEQl4X*V6lYnzR!4X-{ELh7.(c,|: 3X>S c]JϝtfsMy'#B! ;|}v,tσb;=9;2x|ܡS)hσp#I1R3iRȾ;0x!9NCbQ .ߋBl4bXS,{4VX",bLCsFRFN 'Ú@@gp;&0?_>a 1Oe$2DE6"3SS1|@"7(6X!j5D#^cnci+J*&pP _+,60F?ZΝR.3M)~/Ι"Q2#a k `Ǵ2}e+ b&>ACH}Yѓ'#mXqY>WB˞uEP NTۙѵ˲G0}+E=āw"Z1K %B4!>Q\$#3-T ÜaH2!mҴZNjKz^xx}jc6=IԚ] R$|~`Y^-Wb}F[<7X{Y]~{G1eȱ*בi~eedX7=EL,avB=v{6 cPuΰGJ#xqR!"ĴYgybpm=?zmrn>#ʁdF^ٍ>6W>FM4ʔbe55r} ^}kHw2) uwZah1jTߡɯ.\Z 'v5m%Qa$C}퐺V),5͞^PP=V*ntbQoQL!{C٢H{װAO;2$KJ3W;Ǭ3SE,Gbܥ2T\!'6%wIEz6Ee|3j\PD-=E ,۝N2"jl2H$jm=(am֞ۻ".\3ANz"Fuc ߾l;)VO:rډ@>|LVϐóSF7}p <J*%ȾَEv _ѧ~e7h9\}ԉ.'4=rȾX(}GiJR1>MI.F/lm9h;zBN_s}(> A嗺N(gtj̓y;tathumFrZC{|3FV |%DD U#Ŷ%=>V\ 6jTm'{0,&-ZIH:8)109H:j"5&dR~}H9(+<7W|JΦa|/c~!%^9B[^կYѮ':u9<@T?~;JCZ?Xa_ҫ|]&)29! 
' <uD3~~W7UMoi4N-py/p& 5?1KGq^JQ-j{qBd88daiyŀnQA3mqK5ͷ,_4B;Mid/cL_e>n?_kEq!暓3!K6>5i=m7V@ma[A]625Kim`iŋ0o[u)zk^~ywtaEj9 BEо-"O^?ɖ/Gᆇ-¨ALҸo߶CNN%ݮVKT-\e9T;X#(RORzgu$yy٫s>~|YU/'3}+lֽ*xZYmB`CtJ $RwhVEI'/W ~۰Zp}9ԯSJN*9Sw0Mrϩ1cUvIsFÉ'oi<+ߡ>-w!Q:9w $}X$TOx꙳ϡ,?[0EXr9݁7[m& rLެfl^[8``>@)䱷 >#¿)%~ʢ*ն^R#AuA`gjE7AH1J!3ef0ܒ6bðO@0 IVi࿘228*:H- -9pSF"-GZk" ֔'>s"G*%xl val&rM5֜4*rC Aks$1Ə4J@8% ΃ NaXEB\<F$ `0{o#ɔ\kf~MCj@P)nR.s11vF")!ƜFfi&=C3Y۾ &.gk5";x iR"`e2XN(1`+0JR:8$>[< LEt\BG XkiG)+? 2;z%!\\!2cW_,?DP++t r6Vsɑ?fܢNZuKhK9? ~SW|Հ1-' (-Gf\߄WUo׫^'C4M< Vmv >3=,ՀϹ)gÌ=\|y sS(R^=}Wa1mQ |Ն q0EBlB"ESPၥq%xx5bhECMK! 2kW!"MbFmG<#pCm)1}ߐ&mg`2҆^I\03fV nzlgE.!63QrU pUC:H ^)c^`د@=y\EH`4+Xȯȫ>ӮJ+4[^5^b~Ny2XU\LvmRړPNo%D>+?kq69{g ~n7u??n&zS36K7waU Bz:s(VO$UE٠ ғwj^_[.w0KTsMhac=䧰rd!&ڦ%a^jX+Y* {ӞR^Y5-7d(G/B[jNd(>j2*Y;P*_z_l']pxInݗҸ_();ma؜4X;U{{rI k'A [|;}WAQ pvZo?\_Aӷg7%S` ڮjt}~,1i,4*pp4CK+ % _o*u08JvqN/'r I-uB^^EѠ ŜE97KA//Rm I,!d3O Eʊun)_%UwWwu`qRrx jHYhTcGpFljAv)/?X8sj'q/60VrǺ(8![w%.g&CLJS7?Fב}`$v|QUJN'! RrDłW+SoM^,C ÷z15augs#jC7xٷWHLk~@ `L:r&? 
{$'a&\KG$j&s: ?5OFP58]?!NZ"ʀ 8eycDB[*&A%rx h3أsP5s?U#=D:I8/^^wHJcC{)U*|ӌD[j{'<osad?l0="0  EC7Nb`"T c3$$WL.RPplxdpTB*5bخĭJ]̀ 3${bWzN'<:- A *I}k)QAWPTU p-&bl1[&x[MԙD$.}%&0$I!j9[jWL bVc(\UEv~ e6Ul>[R,/m襩Yxu _rVCe֜A!B$WWȵȤu$~|u8AP$]2cu,?"R˖=-=N9 U /o\pUˊ&<㨓8骷bդz*<&}m!Q攋pHP3cf8$3\`u} .>!tx5[p}"vm֛nrܠVʒ_t:9*9/Pj"aC&82d"&\ LIKN(1"VqG槼aGd>nO`n5ckĨ@t.f)M*7Nx55ƠXw|K&[ `>i7oY xR+XvhD(*/@3~+1Xę6L$>REzur^qq]-6Xt G(--C_Ebo!Tp6xX[!I_rX!#@mn8ts=OE#<;s# Awb>NI-gD#`5Zh]ЌTRd:}ɬ GfMiB a%Cjѐ*| XI9jK#J) 'Q"o|=6e$C5pR9#@zF2f$Gb"B҇ K޵H^ı饊 f_ca1rХ{ @0ۊ) F应_u7@JU0WQnXSqYT1q:.8A1uMzo]I-۟7C_.T5IƄ!V{}GD6|˔F~u~qvereOTIgW_ vxq^K_܅'sQb l>FPQɀQ7yN.Ȇ1cqD"Ĥ(l8w46bh"Zӱ 9;JEJ=*2!fUG0oܟֽӷÏgK8l\F{lpz"*:}{rHI^R٥l <~|K!J [ݬ5<;<{ EAY63L?R߸wfH{< r$$&Hx `X Hλ]Dxa]NPu}mMߎ(P(~^Ql8wDzJrGפ'PB"ʆsG&#ON`0wG(E]$TpG Gf:- I8ŔIՍf@FDELܳ0%G #(RzEGP4%>+& keIN뢃D h)g߬n`;D$E5 /u:ГOlEKB+MĀH!L1sz1,PoG-ͱyk+őвqsFM>UrHuOw榷SPX+(ktO/m0 N!i=`Ekjb1\-a?$cM,heVRa GVګls "#qBpL!efaJsc=g&8&\կO ۛr7s)AZF&qX+Q/ɐ&\RaNQOUs[{gw Rǽ1'uHӠ ,(*KK)qNqI8 *|WE%w`Kt@gJlJv, "ox3[[bgʻ߫ت_< Cvj te޻]?E,Hh^s~>KƨAGm/ v' w]5^Oo 1nOԜ$ågla[}u.,FwmmVuq}vZQǶrof/~ͅ3d8?*6=A5!ժX13 `%%t 8*HWNKY.\f^b+kDZ2$ Mtj+y~#qm)m hnY퇾M0:1jtbZWƭ$(FZVV (EISg(N R\Ͽy"6HS%T7=k6qdLUw(ŘJ GGW3k[[$(\፾ta!Wch„萤h= <=+3']i-]iSu L8_e&7?{㼕c'}qΨ% gW!vz1Hh'~[v˓kcfO_Zo# 9zuΨz~[O~= O1vp2;"cp &e+ki~}~e^5@¢.c ǜ"DJk\wZŊU+دI7TO!m'1S`#Nc"B ? >ԀRziaϓ@?I3IA/3t VU.tu`̲ѾZ%%N반C)+\LAG [qP0"Q=#M#mus.Daz$lOx\O˗[=5o>Zz)Ieqta:Nb8<$, .KNnmO,ncnfK;?w{swe/Kuyc|)楂?z<; f`jǏYL)ۼǒaޅD>{yn.jFg%*9!1$*+$+X (fqqND"Ja 8`#<c(=zp1'(tfIYN0nXWx6Ef7KMJ o#-R9SZh@xKV7,)T֛#0hQdyP$)RH)-4 <גqgaaf>uG~ } KN12=vDCPV-H.g6_ȩ>4|XK3 UNζ~: Jܘu0aFpVn&Y 2i<7  hVTE"3<E F9I@4ppP. 
ȝ[%' سg VVJ7H7!6(ب)GN-Cy ^_FN?~:E/d{zYԫgQ겅d1Ё`T!4R rJ^+M\a-,^Ćftx QOsNގ>İyP ̣/z;Zs+QimH<;k oP& RsM`՘-\Zs .x0qn sq%=txM 5&TujQV PhF0'Tz#rT\;jEr9H QԢdVP\+(bYo)rĺJufbʁLTHLɇ1JH J9%:b8 `F^44f펣{+ ]S6F5@6W 3cH.shy,pI̱+HjZˤ\DTA*vȍR`փ.#2Ω< [ Xxa=kpz`νGj,+D0Q F I 6lָ֐SȩB|owTQ:rZV邅;n naWN>~V ]Utˀ'u*M݃>Bu.{N;#O~ffD䇻Yӭ ִӘis2z zp2xyzߎ O֢ /ZQ+Sf/)LzjDl&V;#$֗UG:Uˉoe^)Vkԫb(96.+IZޠcB gq!zYyYJSa-d:/r&3g3c"H . y꾨Vޠ/n)в 'Eox8*\@LNwI!]N"6WJo>Հ^ uc (^r*nYh7y"-:HH@'RXIwy"}f67x;WF;&ͣh LѪ>D *y2 SkE,q4dJUOMkkyA ?P5q͞slo޻ tY(JY 4Tr%~>"wUtwb=7y;r)k5{ӳ}2;m؛:\ J,rz_x&/h( O2e>us1"3=54@}lWDS `fUPFa?6QaOVѩ8J"cU5~:_ ŖSԖp>%&g`j^+h\7?vS.ҬxXHAuh{"HWE.C'evEY̘)axgb+u^?)ݟ*B&I'TѬkݔaNjV^VzLTRW r^P$k=v#c ykY^\gKX$B1#(5$AXg "őwP;JM:&%m 2 x.\r"3e# af65w1&;f.oLEngdbRxP*sЌ#L3#2`[s.E xB/ d:/׫ G)ly!SE_H5Gz8!N2- b2pa{7\eA]ٹU|cigylU̦[IlyǡR5/GXigq^@fp4!BUvX)]UsB4w8 &R0YF˸UEdgscȡں((ԾCZΓ8 L\:Exm=XpfcSpÕ+ixnB@4JB;!sDwOvt]pDPdjyp_ wϾy-xy9 [ĕ%AВf;B@waa% ?G4 $9El8`"n@|g[3*D U#"=jx^% g}+çJRuS ]% 4=( uio5\S qFMWJHR*qNhoyu_Hr/곃nShT?1Qb D#ۭѺ #קii38$@ϱgYxBQk^v)`G+p_UIzw>k(X!7E@$D&C#=^,GXLe6W(Wz١}>}G:6 "Lðxw.^@tNwW?N@7$y5Oja26$m(M>"([%B(,© e^͚CwŸtA A*ϐj5҂cR锑0i*kf(X? ֱ0cMX:?E6$ s)y|O&D÷_/fw3=e_~(8(F ;|d_x_aE1A'I̜6.'$7񶳽7xKk(U N)#Y;\U;[ZS֏ V*,[J V??y;QkIX4uI;=iq?yž˜G(r$+PB2D%pYaK3O-0K+(/8^\s(4V;ǩR][xQ L-K-ШOLO#H#,Vmq9. 5BP6[<^a`ʥɰSc+m-Tfg{lVwJG 'eLuz1NlSAj.=Ljz*@odx LhQPdВ!2P dˆpA1I OuFx2j Jt5PuP)=7sW0)5? ),70;Y@\)Z rU8A UB)f152(\P`(~4yh9 K@c}i/;I8ߛI? ,,؟4jqpq7ٻ6ndW'!EUzc+fˮξ `$&ʼnTi )rHΈC΅MʖI uf~}>t"H7t=>~x݉x2]l0o:-?BOLό2} 0oRXHD)owc^T tUqYZ82XE<7zXI!yX %G{ǜN `P]E~Qj],?_E 婊^scpHlgi"%ؔ2!!yf- -q69ߔ!]RtOkQ]ї#p TڡN;H`r=QN߇l OV2h} "|6|f0}N;[H vjj k>w; 6]2|`:[[d?tþHN~wt}u%!:#wn(2C,Qt5MIߦQi(02fKb,MbJXGc.XC!kkj:q%}dckȳ/jdB}s*ps$TZu&I#]- ^jŐD$H*K$ g{%7 c+:ftL}S&(j*)]6a@ Ci6bʁQa.3`}&qr'}%"zq ?}=rU@`~<*k%djͨX8kiNx_KD*CUR8}ت]5J[ծl8SRݲjRjYHcVfD&eKe9VfrZRJi10:DdN VCcl:mcgqB00QN`F;8jN(vn+f0fyUfBT+2yé֒d$b$Fm܊D%8$*,(rT3D, T3PD=߼iCn%ffwfh%X %f¢<U5ZY}) 5?v~{*x2Ӳ8`]R (cDȜaE_<^Y/Yk(б0 Dxmͧs9 d-]!ll? 
&϶l-~4X} j2epd ;KpKǣ<<Z_:֌.ukb+Ipvr튩,42lA&it> ٮ:L|BQ)F㬘1mQ_UN#'qb=R<-J< $Z?p/$σ鷥S|+uN÷MOTIv- X ~3+l"!/ 7*ޘ:xVŰkܔ_ܨM)j ; !MY! ~)bl ̐D5#|5ez8;A^VTn3M5} Vpv/KbKT eχ>n"/&_Yy'e2 #ځ3;yw s(J*4&vc??- f]`]`]`]`ӭ@<6:}b8AJ0hІ ! XB%I0i5:[n/ާE>C3+.oi4}g4}&]嗗6䃬F^$Joɩ,]j*.ڭaţws?A_ :{F㧇 pޞOfpӹlW~|xURWwoj~rCPubbsfADs$k`E3I앗J3#%6%/䲇4-rF94q)ƒXn2Q0ќƠDXQSLS%S=(+s^cxKPa*,p9"JDS@:69u呓d+ݾlEXVy+0ékb_/J3mO9PLHP]U+Uâ}2.O~w.C!Nk~x݉x2]λ`=R?" CxtxMRq CN40abI8_c՟a%4NN4Zl8jN.9)1%oKw o^ &(׌jt | *AihTLV]GJ,Qbwr7~4#2ȭrE8qN|]% J;X[eX!F`I1q`9R(\V Xh$m5 :{ 41ukWx=+4Ւ h8$Lpybh .  pƢ ۋl!*bj5w Ax|n31ba]LnQMKRM{ \8U/O;ܹ֬g0"Eв֐i D]?gr-ϔjZORB|=E)X=%8ntQT&˦U)O6XyXEH!|Cbx$!ïQDMOB `%7]y 'dh5Eae˰].Wj^}z]@MK7KJiRaL2Zy5foӇK>+T-/ۣP0NiI%g!X3׽p4hR\RJ27 -5ZZŚ&SGN ^)-f֋H?*( #-q0X]_]YD EJ FH`\ CcRcɄH-3K|OOU.o^20 E5u벛OK¼D^a屦9Z(fn֙KKQ/#5֙=S`=S3qY0Ce:bޭi-"'ŭmϸk9m^msQ "uec81<@RZ#E'Ņ!(&;O,I(H\kqƲE[m;|ȬÙQUdQ:cOdIM"Uw"Ͻ$=sF=!h5~QX3LhQB.sK[rNpSh51"Mi[QHFJw㍙}iopw=>X?݀# D3u:~3f!dQ2GF#toBb\&thO?B%Vӏ#DJTGG d(N@x>Ҽ]Z+ =#EZ+ =JEZ+=g4H&zĴZ:kNj%g&gľ#$ e:|-~3oE|_ܞ'&8g,<`$@ŭ*yt#J@6is`ZʸR6<&66Q(QA8G%-֤ք{pXIUUʚJQI`EeE(0PlbH>ZH3OTb&XWeC?R2Ma)Dyh]5>VywWJR$gn2_}jnz7N, =Pe\}lZܼ^%/7~ѽƧ1Jy?~ԩ|C|ܩst{3CN@pj{f^9VO3(h?쎞@=J idb ZLrz<7Kb&$tAl^Z 9. 
|t$1EXc2ʢfABlꚀ\y}G%MvSڛ 2%˪F2,ZY1(X, 4 GHh˩_y?%6E5S&QY[_ߔǵ{Uޯʙc͢fcz$½<նF߬/䏻,jjR.ӏ7fN#\< o̊2.G=F,ԧ Ьy; #1r\/q#pQ媻\<܌TrHIttA8!R90!#PtE Bب1xzFM<=RjKcjLA\JأvQKcclY%LB`ݘ0I((W =vzu䎦ǴIp,N9'Ċւ[ '׏ :O gתWЀbK}b"KuwWpuV.cܽq+T[]عRIـR1[׽HfI RɦR=k|m6N!d ) PG5rH @FH5rXK"Q#KwrG_ #Ė!)N'6/ک_ֹ9z8 4<QHNo<܌T Q._T4!PbDFfNTFKL@8RcV?؞&dMOaC܏%Qk +& HDyl frM3K;D#0׫TV =*f BjVYŠg?{lWE9+ʏAa6UJHQɐI*'5R5̅HHȦBsUs))0[TjL6;{) x($3EUܠRuE.KP 1 + QZEh䠖H 5ƪ\ jkDNحo3(ŵ ȪT*%]&&Zݩ#(k̚P6T4+\ T`ѠpaCdPDLPd(cS ٛkK FB-X R~nXyJP;G+!Gfw 1戎7 rGd0j?zAsCF;ww:J@9]gƈ-brLS?O1MLj5QuzwF/z^yLj!\NLrEE6DEZp%Q$ ǾZgL/BT^g!aZ)oB<܌TbNd>v_ny[tty-*XdbLv\u(}QL1dT&L%'ьqocHqd' h>mۻΙlܴaGM AXe e{RM#!uMQCYi؞dB QP7Fc⎗D󢻷ZR} #~$T(>w3/35Y(\&S6}+yu.E?`2azq vE`R,Ej1__>Q'4N3*GI%OOl$~ cTW9YxEݢ1Tdv[JoUβ\Ur*ύ#>f[ H O Ļ`tv+nFƒJL~!QT3O75VxZH\{d`q[=j{Mm?Xd$=,WiO )d6:Ox8"!9)TpI =s5+0~ʉn_~V[@d(t#,p4gnpZФH PFe 7%2 " p_&]+?`cJ$fV%J*]6kid %a!7@D&:2ɣ16_,C#hiVw! h"~șttl 7bkVKhoVS?rro{B}w}SޖrGl߯ʙc 5 ucK}$-S}xM].oohS8+˃mI3sG7sd87 $0I`R&'A%-60 ]B<\`1\qqv|f0n*x=AeΆ\M?`7llgI0 c6 d}S0'Gx9p]eqxDu耹d~:;)RU;qv"m0) xsvlةGh"Ӊk%L+={%@^ : 6'Isݭ™utP0wępv[]5D$[87Eb$it N0X@B̐[ta`&~|o9KӐYP -0iɼpK_\: ޚ?oͼ\ǵ4:lVKQV?to1Ɣ F0Ni[+dhrD- ]y2X/߽X/e(=nVQem~6QdˆS'ʯ~_էɏoֱFxt|e\͍_/j4֦\eX/ʞD+ع_X֐}ぐaJ1k.G{֮dG3~L\_]4ʧᱲ/%!b:n1jM9<[ ]4Ƨ\NɅ(fT!2u>{:l/sHGcݱ Cў8|.SS%t7nN;BۀWcS/+zFhw21UЃ[Ŵj]l1^W2MT \%u#DPүtDGwZݭ {,"D>K ?vӳe Zea Yp_eA~ο+syχ ៿+x;.(O٥ui3vCs& F5LaȩZj1GK!/+"$G T1%JjJj$*^to5TE̷l,.-n2_}jnQhT&@Gv(HzfegmTڊe-ɉ;~ANK+ǖt/#@ڬbF@QϦi|>!-{qn+)M!-EfCK@"C+_!) 
TTdȸia[a ?}; ?6ᇴkw8Y7Zܛ_0$xǿɉyjJDC ݘ܏!L5P'k^ggwEN|y6m+uxչ &/G(}>+aAuS$Tٔ%aPcIƈao!NXDhvMQ؈XXP6)Tc U6=aZ=^oMf1>dE)7ÌLQ,m^p'șșdYH Tb@*%D|IUPv:b[Z2;`9;9;,rZK)Zq\=(v,&+FK ̵Rιȹ" Rɍ4ej69@Ʋ 2;m/.?|wC.=xe!X'vf<;wrĕ 4"ON=w}A[t(`S~W^k131Swr}}ʃk}?~ #ٿcם|X MɱV36Szs?M՝t=M}FةM˧.Nbِa& fqnݞ^rCD&'ߟsi\knxq@O3#Mh=?jN#>np3G }0Q˕:F gYor1Oya,>/a D 5~X3Wmxݻc*Q^v 5jF o^7~y^`PDoΩ^Tya%m!ܯ=nsn@wo4exX`yuw|Zܷ~Gd؈xwұ} ^6e2_d10!vjKz˸}k[TH[zwy!?_MGR칑`㧻^lrP LA6U>ˠ)`s&8h\+!Rl&|96B|+e{gã0mQk)q8֣7t$; F-p%mW$ݼ~ niMwV#AzcV!>A;7Jed&3RѥXEYa$[t7#8&U Ї%甭UBFaAy/0a 6nam0)0maJ^;Oo%% Ry?y|Z?NtkJ]rBCSg(E|iEWm_ O? FݱtyHgI}:ZN4Zl>Yeɣ`B\su^D':g5PWjv1))(\/Tq&[ nr߷藋vMD>ܜٛI 8֤QqFUk;×-m#MbK߳M<&hŧIeR26Ejb;}I%$tq9޳pQSgp.&1J$#y7bb=[y_*p"nKy}fWӤ5-qE% $D$O KxQ2slM>'R|ѐU.(FYкk.Ր'<,v^]._w9PArq3DR4Z꫊HWB\/]0P02 ;/徢!6Ttu}d4Y} bMQ+c+AAޗ޼^_y"RII<{4?IAQHYRdg6k 1f#O'>RH916XSNxnE!#=/Py-E8 E d(}t1ɾ6m[T!:X%@^ i 祁HhH|s|t-U|A7/vN}HPƋz:]|.s}df>}z瓾MeWUBd>V@8ov.kŲÅ7WE端TKT9/&TC[T@8^]34* =Z~2V?妐؃fRVj£>!AWц$An}3tuʳj3?W7^"#*1mv8)96\Yy,ΙÙǾp qu|1w.Wf=ƈ{,0W۬vC?\xk~iWon&Ys!ej7X%~\oUۇ<^PW;CUhnܗ_/Ǔ=WsPd; Lg~}|*_=|b}?EmZo˦edqtZi-^~ ԝ\}uHɹ/#OL?]g뙌< ?pK# ypīn~+ pwu)aBqa>,\rR437-x4s8h6$TfԏaEJo5Jlr 8~]ŗ)&Ŀ bV2o~}Vr4O/kx(`nAW Ge9;F==Z6\[/V,8^MY?'UCJb_jcSg 2o4VcMg I~˧S\]l㘾Z_؏k so\ @T3X[L"Y^AH6W51 Yo5@4%UѠ5-Aae]jHCC?^ U,M d! 
F0׷r&} }!Ȩ7v|8M5홾/nrVi_N r <[N gʚ؍_a%T@bilR9ߓ$*h=$-um )R )Z֋mC F4^xGeпcn+熜IbO\)8%VA |X%aEoqq(Q=旮EY6!'aF FS˭XaDlYR8RT>,4q|ȀWZ X* n߭J<:)?2VLDq1R@Q+N,RaâFtl0B qA>'qk/jP В6CD=bnsj}1iWƣi'9~.1isWO#Uٯv)m#6lr}.ZPqOuKp1RCJaRVT]injTpM_OCKBOީ]?~h;{EY)'I<~j~ 再kz2:ZvGZyNOJ_?}z[۳_6Ь^+q7?p'!qp >Bk&͑PK TYm#[nސCdʤ]a8;`o?=E?~qqqq"o%," H3-yǘك& (|BB_JoooS4< ɠ >Vg[0Mi n^s0?I~O+֡R SI(^|qGM||]Ys9jkVL-TT6*tJٔ2\j?Srt )x }]v쓣y3,*: POkJvdP !s"(i ")1s0] >k9,jRTXR24CԂ|Yg}Y+aw_ֻ ن,UCt),lQi-@%+7l )86i 80"Ò[Fw@RѲ<7-9oBL`ӊUY)(/E5Rc{jJQXYCK.7ִgkAgt58T*y)Ù9va_D1Y`|0Ra@~CE"fN@&DE0j0\[FkZZѨ3o.OfIQI<8ƔHk< t'HL.$ ~:ǔ.U@ka S8460YKdL⣖TX< A H!Zx󁀊p~%_i'K ^s*nj $d $ 93*4zŮ()GAx JDB)E8$ 1$1ReѣbB 1cj+1\5:Gj8heT(pX90tz wʝ< 6%pʒMۣ6b Vd+6qHLߞu>i_NZo gwhl+$7!M/ Wkwe^i=ֻ(4.tB.q[%hfWƄq"'!,FNBy1$\ԥLM2z1`{`wS88[r@|C  .:3+Sޭ[fh9ZrY1_3Yq ^b2\c [E0 %}NP񞂏USs PZ~[lun/n%-g#I>wHrU_)YiE IRc܄3JXghd"1z*Y&Tt[VPrGbIBsčGc5ǵQs+U Ҭ1 KUXP8J53æ4[l %ĸ(ҦDݥ:s7m'I ã"=kVY*Yr%jp9:]nE>`E΃@ d<5y[Ηf}Pv1JۣUX\Z-Q Jo{9Sq4~g ̿;ˠ$e¯#ꋮXzSiV}1l~0Bim{ܡ8X$U0Wn/cZ/=YXOL~5U.yC~UML;xg eP5]{Iz>,7N6%G=ح NwnϺcnݙ?z>,7j j(bUi3Of_gt>U"T*4kL&ى>[DLj">= <6T4"oTYF)?7G Oh9 v1OicũJ4дN%~HUy+А*#%?\tv&ueyQJC|0 ]C1ݹacMZؔ+gBrY*0J){q8ǘMZ!^ ō-%@@Gv(.09:0Pd2 7Z|fKӃV1hϲ*<-\d#Qq#ΙsԜS3{M-+4aRE^Rb04vnHUH+QtdVgQ8;:r.9^`H?fT)?N4Yy(nK9`ѱp>k,L*7h)72.+C )wK|oϰ:kjD rMݙYq;d^!Xc숀6/=uC>l|C>KzS483|ךڭlTS{WSɾu Qo?6_@5.qykѰbH1ݙ|J֒м;>C tTؙpp\?.~a{Խ$l$ 3Nw.5\nv]g*_vD 6Ӛó2j#;oARӗ rC?:AȆ ̊ѷXA%QS|I kYq3bTMU/׹m;}dτ`,!2 ((>3vc~;?rl(-aכW*)8-K2 g莭1CбJ`mD0NX +2èԝż+RV@b5ɓPPWB~@p"'U r[#zf40`%pv+hh0r26g#'JH@/s\$e]nBWB|psfOqVYfɖBXC0钎T{{vEZ˙!ҲH2.1 2 qBQxM.`$7_vŪYptV܅Sr\&TH4K\33A厠 =V_ψcIBosKuu=78؜yW1%*ر[MR҈!g(S԰j18L/i,pB>WNQҖ+,3apJXBGѝ{/2ո=}?zL\ bG+]jxy^T!T+ݵ9mP?JjHG;w>%naҨsWkcT41 JP-v&9$NIf,;yHQ;ȪL 1{`:J0i+U *[}{[WL OE1ϕY۟G?TT⌑~ XX02Kr*r$6SgesdSX5z)|݂YY~*riWdV H)[ɥ8k0v*CQC{*rE&buX7Ԛ}mm/p:-e>&:"WJG#їK >VNPjY昲F#o-0 *@>c WtkSU Ka*݀8YLs [1oBagLQ M<lJ@ cULs)"@χUtS?v8,FVޝjtѠOr!' 
^Pt|ψ`qo֓>LSMwKw' QqT]LF*}ep$v\RQ6 h;rk$L(R$29zDQSfA }@-ETdMuLKc١Rt\DӒ\Z"Eɓab$;ե ۱ F%Lei ^c .p4Β "$N(EwmT#^H%, (XQ>Mt~-fH ,dF'NI#F2!?eyPGk%.JRȧE ҔxH!$Mg9>stXjW.k3EG75WꝻ4F`{K;C:OioAO'?OOgiTD}q|̅S/48t,\n>N|y$)^ƹȥl5WrVt͂ע5rWi~prt w@vhK]y&Ousn%5,/N6"4H+=h iy\7V,rGRPJNh@'dDyG5Fɺ^i&Lׂj-NSctc!SE.c}Ȝo0!)BXmwYP;IkbpJFe2ɉ!Υ‰zF_[ XTrGW4%'MQNvRp5B$LSL8C$"Rւj)kFI^SDdgg wTJ;XLQF9`QChG!_"=Lyr?aU(O G7~oV#75AЊm}kwgun~D_lX!uCWxmjKͤjǛn@u?׉b>XY)J~Q, Hk4͚XmP9_.]ݫ"RoBMwgdCf~[nk e=d `J0^~Ny]%~^+L)φntUgW>M^.-߬FmH탥~#wQ&[Li%F2u3s\.J|Ϭ0y]ڜ5l{[3~ZxmojbJ2mFogA˜^.+@%fhp!3G1WDcÃK\RFn`K/#GIg4(7_2CZ#(2X_~rKY0W-^. Zy9I?:9FFn:P{<2aV{k<Ғ!G 4Valx4!x/@NޢOjxH[-7l}D0" į^A0PKPܠvB9271vJ?ÅkpvJ9;OGzr{y3H֝dX_; U1&d%d&)t̒]+Zy}%oo݁.EA_АZ @y#NioFfBA&7~jvC2R[u+quٶiu&x:X1jKb ]!+iFxoz]VwVw _<߼»%T~d_.RXJkZ!-nw޿鸋~hnx~T[0FszrwShK%1-=e*)].@/}> )F9E_25۲'uom0 >o>-S-WRZly0W*Bu}sAAA>Z.TZ]-FeikT#fIA$חFgi/OgJ,UE#&E(MPIdz.4пx>dgZĚ5ڲο^-_1uXR^zQ4]|EFg:;#l˪2 [/\ɚllN8.y\5_MsޚѿڳTb3ߥYM[iW6:[ .u_úŠ}6+Y)YwZZ.4䉫N5U nĭ}2.uz9Jh7\.>`$[nSá?^z"6JV^gYG37Q.xc~Q\U6e!N:n2 %.K:\ria1nJEv Q6 M]nGvލr{`{ܷ.DCEaօv v ]-&Bȭ{o Պv WPv0 mmw]?kV[yk+m%- {?yn|`Kq3hgOTW.No|OgUivAy}v]4b>>K/n/WwTӪ?-$R ˭;*avuE]]|UB`S(HAH=;b =[M)zG] C4S c]輦wKAtR#ƻ;qM-$лuwLiwKAtR#ƻ h]$w}{t@B9D0H@tH4%醼HyP/^ C6?hם#)н jꠋN7OM=Mt.Mt`4GIv͌{tKMR%sFĮYƏ *e &W&Lj-sjbG9: 7E*FF)d$/mi,p,L$`B92-P\H<ˣ%٣ũ@h(҅>&b\LsT eF E,fX*e0L-!J[QKS()5u_8\P綝nc遅C/Z@Q15nՂ**"~qNPɥz]RHCyC ; 5aBJ/nJ^<Ur;$Ϥ|7~'$?!P^2Mqsw;/n~S[[{ѹ_Lw[-Unzt{Ϗ>8],bvÊҗM&᣽m>QlnWCQ~Y%YrdAj^Ի'[5J_}<{R WeLE <#*3p~3#Bsm UE浰?/׏\g!Lb ؆!JDNVkbQTmߕ9v\0rNQnE KgcDgD7n[Ƙpg:|fԺz}?zu}R7\I6ۊ>{K ZPAH-.6m`_\xKėh_9:l{l*yF٧j}گʚ|0{-ˊ׭?ߐ/{@8`Wk=v%Oq߸Rljn MN_p4ږʃ32P:ɋ= -&V $+tvki2d "WP{:Y٭^srG3^g4R2`ЌW!z `-N2p8xDŸ4%=RyBQ:w i"I P{+mIBST$RJd l­ .3| z^=2F` +OfJQzzҊ&'f-ԊmUT Px?Nq=25/ÃȈ #zzp+1qb/GpVGFWq┃@k 6 U!4$`M ʳ2ĮK#U Y瀔1g$vR'4#FB#Sg>eΖ+[RfC_i$a!7428x/Hb[j)|ԩ`n˅TT7"Ө4}/]~Q^2-KFVyDQgF\ŽI6Rc0XI8D$퇰$ncϒ&rH:ޗJ=i BP ز4X3ds!.$PP.B1/ &EfS&)Ef$AcEmU1DG8І^p1hE8<8@M@}0 ;K%>>+R^p#W[bHGfM :.@8Gۑ&kiGĎ 5N7î"uX{r{}>ݞ7X*26-_rk%Hp+~jH[%IJZ&8,LQ38%ST-'l0frTtNjp^oͅ@4qm8^Y== 
Feb 17 08:40:50 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 17 08:40:51 crc restorecon[4680]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 17 08:40:51 crc restorecon[4680]:
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 
17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 
17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 
08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 
crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 
08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc 
restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c14 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 08:40:51 crc 
restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c19,c24 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 
08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 
08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c6 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc 
restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc 
restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:51 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc 
restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc 
restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 08:40:52 crc restorecon[4680]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 08:40:52 crc restorecon[4680]:
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 08:40:52 crc restorecon[4680]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 17 08:40:52 crc restorecon[4680]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 17 08:40:52 crc kubenswrapper[4813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 08:40:52 crc kubenswrapper[4813]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 17 08:40:52 crc kubenswrapper[4813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 08:40:52 crc kubenswrapper[4813]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 08:40:52 crc kubenswrapper[4813]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 17 08:40:52 crc kubenswrapper[4813]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.829003 4813 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837120 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837873 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837886 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837893 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837899 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837906 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837912 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837918 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837924 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 08:40:52 crc kubenswrapper[4813]:
W0217 08:40:52.837929 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837935 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837940 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837946 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837960 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837965 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837970 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837976 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837981 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837988 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837993 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.837998 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838003 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838009 4813 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838016 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838023 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838030 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838035 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838041 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838047 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838053 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838060 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838065 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838072 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838077 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838082 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838087 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838092 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838097 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838102 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838108 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838115 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838122 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838128 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838134 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838140 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838145 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838150 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838156 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838161 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838169 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838175 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838180 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838185 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838191 4813 feature_gate.go:330] unrecognized feature gate: Example
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838196 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838201 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838206 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838211 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838217 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838224 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838230 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838236 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838243 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838249 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838254 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838259 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838264 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838270 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838275 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838280 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.838285 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838498 4813 flags.go:64] FLAG: --address="0.0.0.0"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838521 4813 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838533 4813 flags.go:64] FLAG: --anonymous-auth="true"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838543 4813 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838551 4813 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838558 4813 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838566 4813 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838574 4813 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838581 4813 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838589 4813 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838596 4813 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838604 4813 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838611 4813 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838617 4813 flags.go:64] FLAG: --cgroup-root=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838624 4813 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838630 4813 flags.go:64] FLAG: --client-ca-file=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838638 4813 flags.go:64] FLAG: --cloud-config=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838644 4813 flags.go:64] FLAG: --cloud-provider=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838651 4813 flags.go:64] FLAG: --cluster-dns="[]"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838658 4813 flags.go:64] FLAG: --cluster-domain=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838664 4813 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838670 4813 flags.go:64] FLAG: --config-dir=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838676 4813 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838683 4813 flags.go:64] FLAG: --container-log-max-files="5"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838691 4813 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838697 4813 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838704 4813 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838710 4813 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838716 4813 flags.go:64] FLAG: --contention-profiling="false"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838723 4813 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838729 4813 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838736 4813 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838742 4813 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838751 4813 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838757 4813 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838764 4813 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838770 4813 flags.go:64] FLAG: --enable-load-reader="false"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838776 4813 flags.go:64] FLAG: --enable-server="true"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838796 4813 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838804 4813 flags.go:64] FLAG: --event-burst="100"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838811 4813 flags.go:64] FLAG: --event-qps="50"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838817 4813 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838824 4813 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838830 4813 flags.go:64] FLAG: --eviction-hard=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838851 4813 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838857 4813 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838863 4813 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838870 4813 flags.go:64] FLAG: --eviction-soft=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838876 4813 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838882 4813 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838889 4813 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838895 4813 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838901 4813 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838908 4813 flags.go:64] FLAG: --fail-swap-on="true"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838914 4813 flags.go:64] FLAG: --feature-gates=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838922 4813 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838928 4813 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838935 4813 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838941 4813 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838947 4813 flags.go:64] FLAG: --healthz-port="10248"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838954 4813 flags.go:64] FLAG: --help="false"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838960 4813 flags.go:64] FLAG: --hostname-override=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838966 4813 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838972 4813 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838979 4813 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838988 4813 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.838994 4813 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839000 4813 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839007 4813 flags.go:64] FLAG: --image-service-endpoint=""
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839013 4813 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839019 4813 flags.go:64] FLAG: --kube-api-burst="100" Feb 17 08:40:52 crc
kubenswrapper[4813]: I0217 08:40:52.839025 4813 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839031 4813 flags.go:64] FLAG: --kube-api-qps="50" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839037 4813 flags.go:64] FLAG: --kube-reserved="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839043 4813 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839049 4813 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839056 4813 flags.go:64] FLAG: --kubelet-cgroups="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839062 4813 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839068 4813 flags.go:64] FLAG: --lock-file="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839074 4813 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839080 4813 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839086 4813 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839096 4813 flags.go:64] FLAG: --log-json-split-stream="false" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839102 4813 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839107 4813 flags.go:64] FLAG: --log-text-split-stream="false" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839113 4813 flags.go:64] FLAG: --logging-format="text" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839120 4813 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839127 4813 flags.go:64] FLAG: 
--make-iptables-util-chains="true" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839133 4813 flags.go:64] FLAG: --manifest-url="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839139 4813 flags.go:64] FLAG: --manifest-url-header="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839147 4813 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839154 4813 flags.go:64] FLAG: --max-open-files="1000000" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839162 4813 flags.go:64] FLAG: --max-pods="110" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839168 4813 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839174 4813 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839180 4813 flags.go:64] FLAG: --memory-manager-policy="None" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839186 4813 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839193 4813 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839199 4813 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839206 4813 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839221 4813 flags.go:64] FLAG: --node-status-max-images="50" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839227 4813 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839234 4813 flags.go:64] FLAG: --oom-score-adj="-999" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839240 4813 flags.go:64] FLAG: --pod-cidr="" Feb 17 08:40:52 crc 
kubenswrapper[4813]: I0217 08:40:52.839247 4813 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839258 4813 flags.go:64] FLAG: --pod-manifest-path="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839264 4813 flags.go:64] FLAG: --pod-max-pids="-1" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839271 4813 flags.go:64] FLAG: --pods-per-core="0" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839353 4813 flags.go:64] FLAG: --port="10250" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839361 4813 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839367 4813 flags.go:64] FLAG: --provider-id="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839373 4813 flags.go:64] FLAG: --qos-reserved="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839379 4813 flags.go:64] FLAG: --read-only-port="10255" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839386 4813 flags.go:64] FLAG: --register-node="true" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839392 4813 flags.go:64] FLAG: --register-schedulable="true" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839398 4813 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839409 4813 flags.go:64] FLAG: --registry-burst="10" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839415 4813 flags.go:64] FLAG: --registry-qps="5" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839421 4813 flags.go:64] FLAG: --reserved-cpus="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839427 4813 flags.go:64] FLAG: --reserved-memory="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839435 4813 flags.go:64] FLAG: 
--resolv-conf="/etc/resolv.conf" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839441 4813 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839450 4813 flags.go:64] FLAG: --rotate-certificates="false" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839456 4813 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839462 4813 flags.go:64] FLAG: --runonce="false" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839468 4813 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839475 4813 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839481 4813 flags.go:64] FLAG: --seccomp-default="false" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839487 4813 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839494 4813 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839500 4813 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839506 4813 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839513 4813 flags.go:64] FLAG: --storage-driver-password="root" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839519 4813 flags.go:64] FLAG: --storage-driver-secure="false" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839525 4813 flags.go:64] FLAG: --storage-driver-table="stats" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839531 4813 flags.go:64] FLAG: --storage-driver-user="root" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839537 4813 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 17 08:40:52 crc kubenswrapper[4813]: 
I0217 08:40:52.839544 4813 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839550 4813 flags.go:64] FLAG: --system-cgroups="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839556 4813 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839566 4813 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839572 4813 flags.go:64] FLAG: --tls-cert-file="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839579 4813 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839587 4813 flags.go:64] FLAG: --tls-min-version="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839593 4813 flags.go:64] FLAG: --tls-private-key-file="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839598 4813 flags.go:64] FLAG: --topology-manager-policy="none" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839605 4813 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839610 4813 flags.go:64] FLAG: --topology-manager-scope="container" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839617 4813 flags.go:64] FLAG: --v="2" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839625 4813 flags.go:64] FLAG: --version="false" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839633 4813 flags.go:64] FLAG: --vmodule="" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839640 4813 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.839647 4813 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839799 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 08:40:52 crc kubenswrapper[4813]: 
W0217 08:40:52.839806 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839813 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839818 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839824 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839830 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839836 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839841 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839846 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839852 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839857 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839862 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839868 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839873 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839878 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839883 4813 feature_gate.go:330] unrecognized feature gate: 
MultiArchInstallAzure Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839889 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839894 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839899 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839906 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839913 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839919 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839925 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839932 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839938 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839944 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839950 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839956 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839961 4813 feature_gate.go:330] unrecognized feature gate: Example Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839968 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839974 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839981 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839989 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.839994 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840000 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840005 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840011 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840016 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840022 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840027 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840033 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840038 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840044 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840049 4813 feature_gate.go:330] 
unrecognized feature gate: DNSNameResolver Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840054 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840059 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840064 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840070 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840075 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840080 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840085 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840090 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840095 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840101 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840106 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840111 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840116 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840121 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 
08:40:52.840127 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840139 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840145 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840152 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840157 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840163 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840168 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840174 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840180 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840186 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840191 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840196 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.840202 4813 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.840219 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.854284 4813 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.854344 4813 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854443 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854452 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854458 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854464 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854469 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854476 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854484 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854491 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854497 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854503 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854509 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854514 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854520 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854525 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854531 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854536 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854541 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854546 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854551 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854556 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854561 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854566 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854571 4813 
feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854576 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854581 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854586 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854592 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854599 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854607 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854612 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854617 4813 feature_gate.go:330] unrecognized feature gate: Example Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854624 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854631 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854636 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854641 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854646 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854651 4813 
feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854656 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854661 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854665 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854671 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854676 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854681 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854686 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854691 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854696 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854701 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854706 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854711 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854716 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854721 4813 feature_gate.go:330] unrecognized feature gate: 
NetworkLiveMigration Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854733 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854738 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854744 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854749 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854754 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854759 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854764 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854769 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854774 4813 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854780 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854785 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854790 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854797 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854802 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 
08:40:52.854807 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854812 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854817 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854822 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854827 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.854832 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.854841 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855006 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855016 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855021 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855028 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855033 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement 
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855038 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855044 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855051 4813 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855057 4813 feature_gate.go:330] unrecognized feature gate: Example Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855062 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855067 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855072 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855077 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855083 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855088 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855092 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855097 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855102 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855107 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855112 4813 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855117 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855121 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855127 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855133 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855139 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855144 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855149 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855154 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855158 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855163 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855168 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855174 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855179 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855184 4813 
feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855188 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855194 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855198 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855203 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855208 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855213 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855218 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855223 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855228 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855233 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855238 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855245 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855251 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855257 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855262 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855267 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855272 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855277 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855282 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855287 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855332 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855338 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855344 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855350 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855355 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855361 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855367 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855374 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855380 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855386 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855391 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855396 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855401 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855406 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855411 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855416 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 08:40:52 crc kubenswrapper[4813]: W0217 08:40:52.855420 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.855429 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.856381 4813 server.go:940] "Client rotation is on, will bootstrap in background" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.863574 4813 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.863770 4813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.865653 4813 server.go:997] "Starting client certificate rotation" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.865706 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.865953 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-27 12:07:03.996114251 +0000 UTC Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.866071 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.891172 4813 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 08:40:52 crc kubenswrapper[4813]: E0217 08:40:52.893945 4813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.113:6443: connect: connection refused" 
logger="UnhandledError" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.894743 4813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.913840 4813 log.go:25] "Validated CRI v1 runtime API" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.956994 4813 log.go:25] "Validated CRI v1 image API" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.961723 4813 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.968092 4813 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-17-08-37-07-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 17 08:40:52 crc kubenswrapper[4813]: I0217 08:40:52.968143 4813 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:52.999795 4813 manager.go:217] Machine: {Timestamp:2026-02-17 08:40:52.995867569 +0000 UTC m=+0.656628842 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:2e490fc5-8f26-428d-b89b-fef6c7566c17 
BootID:638419d1-5faa-4f84-9c92-7db1de46de03 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:03:91:39 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:03:91:39 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:bc:de:79 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e3:3c:da Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:17:75:fa Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:42:78:f4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ae:90:8d:31:30:a4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:b7:2d:69:6c:95 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] 
Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.000195 4813 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.000428 4813 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.003379 4813 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.003730 4813 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.003791 4813 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.004116 4813 topology_manager.go:138] "Creating topology manager with none policy" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.004136 4813 container_manager_linux.go:303] "Creating device plugin manager" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.004954 4813 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.005012 4813 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.006374 4813 state_mem.go:36] "Initialized new in-memory state store" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.006533 4813 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.010535 4813 kubelet.go:418] "Attempting to sync node with API server" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.010574 4813 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.010622 4813 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.010643 4813 kubelet.go:324] "Adding apiserver pod source" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.010662 4813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 17 08:40:53 crc kubenswrapper[4813]: W0217 08:40:53.017359 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.017498 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 17 08:40:53 crc kubenswrapper[4813]: W0217 08:40:53.017461 4813 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.017593 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.019878 4813 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.022144 4813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.024640 4813 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.026454 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.026498 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.026514 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.026529 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.026552 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.026567 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.026580 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.026603 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.026621 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.026636 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.026681 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.026699 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.027852 4813 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.028649 4813 server.go:1280] "Started kubelet" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.030010 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.030218 4813 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.030242 4813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 17 08:40:53 crc systemd[1]: Started Kubernetes Kubelet. Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.031194 4813 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.033661 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.033862 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:23:07.224000863 +0000 UTC Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.033965 4813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.034304 4813 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.034354 4813 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.034438 4813 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 
08:40:53.034504 4813 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.034573 4813 server.go:460] "Adding debug handlers to kubelet server" Feb 17 08:40:53 crc kubenswrapper[4813]: W0217 08:40:53.035276 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.035426 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.037224 4813 factory.go:55] Registering systemd factory Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.037268 4813 factory.go:221] Registration of the systemd container factory successfully Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.037752 4813 factory.go:153] Registering CRI-O factory Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.037783 4813 factory.go:221] Registration of the crio container factory successfully Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.037933 4813 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.037969 4813 factory.go:103] Registering Raw factory Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.037996 4813 manager.go:1196] Started watching for new ooms in manager Feb 17 
08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.038396 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="200ms" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.039258 4813 manager.go:319] Starting recovery of all containers Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.046002 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894fc054525f3cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 08:40:53.028598733 +0000 UTC m=+0.689359986,LastTimestamp:2026-02-17 08:40:53.028598733 +0000 UTC m=+0.689359986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.060434 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.060995 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: 
I0217 08:40:53.061029 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061051 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061074 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061093 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061114 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061137 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061168 4813 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061200 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061227 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061254 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061276 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061299 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061388 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061409 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061468 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061489 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061516 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061577 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061671 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061704 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061731 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061758 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061780 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061801 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061829 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" 
seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061882 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061904 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061922 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061941 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061965 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.061995 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: 
I0217 08:40:53.062020 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062050 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062077 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062101 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062124 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062150 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062174 4813 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062199 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062225 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062250 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062278 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062303 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062390 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062411 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062429 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062450 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062468 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062486 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062504 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062544 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062568 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062588 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062609 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062630 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062649 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" 
seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062667 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062685 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062705 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062723 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062742 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062762 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 
08:40:53.062783 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062803 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062823 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.062841 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.065869 4813 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.065915 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" 
seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.065941 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.065963 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.065983 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066001 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066021 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066039 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 
08:40:53.066057 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066077 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066098 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066119 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066143 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066163 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066182 4813 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066200 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066219 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066256 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066279 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066304 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066435 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066463 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066488 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066525 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066544 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066566 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066585 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066605 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066627 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066646 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066665 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066686 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066705 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066724 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066743 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066763 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066781 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066809 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066829 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" 
seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066851 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066872 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066893 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066913 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066933 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066955 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: 
I0217 08:40:53.066975 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.066996 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067017 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067036 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067055 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067075 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067093 4813 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067113 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067131 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067150 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067169 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067188 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067207 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067226 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067244 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067264 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067283 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067301 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067420 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067439 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067457 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067508 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067527 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067546 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067566 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067585 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067604 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067623 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067647 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067672 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067698 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067722 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067743 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067763 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067781 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067800 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067820 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067838 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067857 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067876 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067900 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067925 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067947 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067969 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.067989 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068008 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068027 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068054 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068072 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" 
seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068091 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068109 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068127 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068145 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068164 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068182 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068203 4813 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068224 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068243 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068261 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068280 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068298 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068381 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068402 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068455 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068477 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068496 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068553 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068574 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068597 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068659 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068679 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068730 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068754 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068772 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068826 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068851 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068870 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068927 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.068949 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069006 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069031 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069054 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069109 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069129 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069148 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069203 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069224 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069245 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069297 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069354 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069373 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069391 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" 
seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069445 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069465 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069483 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069537 4813 reconstruct.go:97] "Volume reconstruction finished" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.069552 4813 reconciler.go:26] "Reconciler: start to sync state" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.075387 4813 manager.go:324] Recovery completed Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.092683 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.095647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.095764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.095810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.097636 4813 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.097693 4813 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.097737 4813 state_mem.go:36] "Initialized new in-memory state store" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.106926 4813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.109726 4813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.109838 4813 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.109924 4813 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.110111 4813 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 08:40:53 crc kubenswrapper[4813]: W0217 08:40:53.111175 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.111232 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.121071 4813 policy_none.go:49] "None 
policy: Start" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.122111 4813 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.122155 4813 state_mem.go:35] "Initializing new in-memory state store" Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.134811 4813 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.198299 4813 manager.go:334] "Starting Device Plugin manager" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.198387 4813 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.198405 4813 server.go:79] "Starting device plugin registration server" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.198933 4813 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.198957 4813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.199147 4813 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.199258 4813 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.199268 4813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.209889 4813 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.211116 4813 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.211198 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.212201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.212244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.212257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.212432 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.212693 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.212773 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.213464 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.213492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.213507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.214338 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.215423 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.215501 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.217859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.217862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.217971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.217992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.217903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.218076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.218100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.217921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.218156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.218466 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.218675 
4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.218756 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.222215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.222333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.222439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.222640 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.222677 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.222756 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.222808 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.222764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.222874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.223812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.223857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.223875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.223913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.223932 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.223944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.224160 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.224218 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.225611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.225644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.225680 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.240719 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="400ms" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272003 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272104 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272128 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272154 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272196 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272277 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272387 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272441 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272498 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272555 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272605 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272657 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272707 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.272789 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.299600 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.301183 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.301233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.301253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.301290 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.301852 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.113:6443: connect: connection refused" node="crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.374625 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.374703 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.374741 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.374891 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.374819 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.374916 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375024 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375074 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375018 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375112 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375149 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375188 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375078 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375224 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375152 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375260 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 
08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375294 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375353 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375376 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375406 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375259 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375353 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375409 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375445 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375359 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375547 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 
08:40:53.375711 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375808 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.375994 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.502370 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.503824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.503896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.503914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.503948 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.504381 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.572539 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.585875 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.596171 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.622703 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.626044 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 08:40:53 crc kubenswrapper[4813]: W0217 08:40:53.626490 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3c750618dc0a86df5b26681eb50f26dd6b735722cf5dd39dbb6ff011196e1c63 WatchSource:0}: Error finding container 3c750618dc0a86df5b26681eb50f26dd6b735722cf5dd39dbb6ff011196e1c63: Status 404 returned error can't find the container with id 3c750618dc0a86df5b26681eb50f26dd6b735722cf5dd39dbb6ff011196e1c63 Feb 17 08:40:53 crc kubenswrapper[4813]: W0217 08:40:53.630520 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9d99143911e3e4a024f4b3decbddec80aecc3be4d8bce70aba4fc44f3c0ba8c1 WatchSource:0}: Error finding container 9d99143911e3e4a024f4b3decbddec80aecc3be4d8bce70aba4fc44f3c0ba8c1: Status 404 returned error can't find the container with id 9d99143911e3e4a024f4b3decbddec80aecc3be4d8bce70aba4fc44f3c0ba8c1 Feb 17 08:40:53 crc kubenswrapper[4813]: W0217 08:40:53.639394 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f582054f7600ab7c398a2bdb0900e6d14905999e1a29f836eb639eacbcd50792 WatchSource:0}: Error finding container f582054f7600ab7c398a2bdb0900e6d14905999e1a29f836eb639eacbcd50792: Status 404 returned error can't find the container with id f582054f7600ab7c398a2bdb0900e6d14905999e1a29f836eb639eacbcd50792 Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.641618 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection 
refused" interval="800ms" Feb 17 08:40:53 crc kubenswrapper[4813]: W0217 08:40:53.654744 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-82a22307ba973ac4701d5e53832c9b83788198326591ab543434d38f168038a0 WatchSource:0}: Error finding container 82a22307ba973ac4701d5e53832c9b83788198326591ab543434d38f168038a0: Status 404 returned error can't find the container with id 82a22307ba973ac4701d5e53832c9b83788198326591ab543434d38f168038a0 Feb 17 08:40:53 crc kubenswrapper[4813]: W0217 08:40:53.656719 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a54ed72972521d8068e04e0a4528afe00bd8a62609438a3fb6fd2b3ca6448aed WatchSource:0}: Error finding container a54ed72972521d8068e04e0a4528afe00bd8a62609438a3fb6fd2b3ca6448aed: Status 404 returned error can't find the container with id a54ed72972521d8068e04e0a4528afe00bd8a62609438a3fb6fd2b3ca6448aed Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.905137 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.907951 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.908007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.908024 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:53 crc kubenswrapper[4813]: I0217 08:40:53.908060 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 08:40:53 crc kubenswrapper[4813]: E0217 08:40:53.908683 4813 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 17 08:40:54 crc kubenswrapper[4813]: I0217 08:40:54.031118 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 17 08:40:54 crc kubenswrapper[4813]: I0217 08:40:54.034328 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:58:34.964879711 +0000 UTC Feb 17 08:40:54 crc kubenswrapper[4813]: I0217 08:40:54.116089 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f582054f7600ab7c398a2bdb0900e6d14905999e1a29f836eb639eacbcd50792"} Feb 17 08:40:54 crc kubenswrapper[4813]: I0217 08:40:54.118012 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9d99143911e3e4a024f4b3decbddec80aecc3be4d8bce70aba4fc44f3c0ba8c1"} Feb 17 08:40:54 crc kubenswrapper[4813]: I0217 08:40:54.119504 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3c750618dc0a86df5b26681eb50f26dd6b735722cf5dd39dbb6ff011196e1c63"} Feb 17 08:40:54 crc kubenswrapper[4813]: I0217 08:40:54.123036 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a54ed72972521d8068e04e0a4528afe00bd8a62609438a3fb6fd2b3ca6448aed"} Feb 17 08:40:54 crc kubenswrapper[4813]: I0217 08:40:54.124281 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"82a22307ba973ac4701d5e53832c9b83788198326591ab543434d38f168038a0"} Feb 17 08:40:54 crc kubenswrapper[4813]: W0217 08:40:54.292613 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 17 08:40:54 crc kubenswrapper[4813]: E0217 08:40:54.292782 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 17 08:40:54 crc kubenswrapper[4813]: W0217 08:40:54.322194 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 17 08:40:54 crc kubenswrapper[4813]: E0217 08:40:54.322343 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 17 08:40:54 crc kubenswrapper[4813]: E0217 08:40:54.443691 4813 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="1.6s" Feb 17 08:40:54 crc kubenswrapper[4813]: W0217 08:40:54.483573 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 17 08:40:54 crc kubenswrapper[4813]: E0217 08:40:54.483726 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 17 08:40:54 crc kubenswrapper[4813]: W0217 08:40:54.602132 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 17 08:40:54 crc kubenswrapper[4813]: E0217 08:40:54.602252 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 17 08:40:54 crc kubenswrapper[4813]: I0217 08:40:54.709742 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:54 crc kubenswrapper[4813]: I0217 08:40:54.711272 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:54 crc kubenswrapper[4813]: I0217 08:40:54.711375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:54 crc kubenswrapper[4813]: I0217 08:40:54.711400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:54 crc kubenswrapper[4813]: I0217 08:40:54.711443 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 08:40:54 crc kubenswrapper[4813]: E0217 08:40:54.712018 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.031239 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.034282 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.035296 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:38:41.855967548 +0000 UTC Feb 17 08:40:55 crc kubenswrapper[4813]: E0217 08:40:55.035678 4813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.113:6443: connect: connection refused" 
logger="UnhandledError" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.130498 4813 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e" exitCode=0 Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.130619 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e"} Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.130688 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.133302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.133395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.133421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.134368 4813 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286" exitCode=0 Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.134497 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.134496 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286"} Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.135875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.135933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.135957 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.139861 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851"} Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.139912 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46"} Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.139935 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e"} Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.143398 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108" exitCode=0 Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.143506 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108"} Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.143556 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.146149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.148422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.148450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.150398 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f" exitCode=0 Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.150497 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f"} Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.150705 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.154428 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.158696 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:55 crc 
kubenswrapper[4813]: I0217 08:40:55.158741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.158758 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.160619 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.160656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:55 crc kubenswrapper[4813]: I0217 08:40:55.160675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:55 crc kubenswrapper[4813]: E0217 08:40:55.673851 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894fc054525f3cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 08:40:53.028598733 +0000 UTC m=+0.689359986,LastTimestamp:2026-02-17 08:40:53.028598733 +0000 UTC m=+0.689359986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.031245 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: 
connection refused Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.035434 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:10:34.0327944 +0000 UTC Feb 17 08:40:56 crc kubenswrapper[4813]: E0217 08:40:56.046162 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="3.2s" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.158713 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e94eaf80152a93f49d2f1a1e0e90c908ff589a8333803e08fa1c1d2a13122d73"} Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.158781 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"20515bd60acb180cb02d22db9ef7b9556ca5b2747ae85d57b78afdc866987007"} Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.158803 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7afd399e353b112d2e6f6cce0e4a33a6a441dbe9aa5ed71b9a3d97dc2b5ccad0"} Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.158854 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.160763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.160813 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.160862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.163062 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61"} Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.163179 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.164189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.164246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.164266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.169485 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc"} Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.169534 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54"} Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.169591 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c"} Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.169611 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f"} Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.172530 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d" exitCode=0 Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.172589 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d"} Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.172675 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.174027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.174061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.174075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.176992 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7ddb252c2ed9e53e9c1fffa1b4ff0929f35762930bc4d83d6fe9659653ed5230"} Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.177212 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.180211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.180255 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.180274 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.313080 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.314785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.314832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.314848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:40:56 crc kubenswrapper[4813]: I0217 08:40:56.314877 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 08:40:56 crc kubenswrapper[4813]: E0217 08:40:56.315283 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 17 08:40:56 crc kubenswrapper[4813]: W0217 08:40:56.322941 4813 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 17 08:40:56 crc kubenswrapper[4813]: E0217 08:40:56.323035 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 17 08:40:56 crc kubenswrapper[4813]: W0217 08:40:56.492400 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 17 08:40:56 crc kubenswrapper[4813]: E0217 08:40:56.492510 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 17 08:40:56 crc kubenswrapper[4813]: W0217 08:40:56.523010 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 17 08:40:56 crc kubenswrapper[4813]: E0217 08:40:56.523082 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.035795 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:21:39.689914063 +0000 UTC
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.183458 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c" exitCode=0
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.183546 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c"}
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.183702 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.185364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.185737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.185756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.189396 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad"}
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.189515 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.189630 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.189660 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.189734 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.191000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.191053 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.191072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.191296 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.191421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.191481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.191500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.191846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.191894 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.191914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.193029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.193080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:40:57 crc kubenswrapper[4813]: I0217 08:40:57.193103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.035995 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:03:23.29739887 +0000 UTC
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.203828 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a"}
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.203918 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.204023 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.204035 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.203915 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8"}
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.204108 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621"}
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.205722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.205782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.205800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.205963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.206001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.206019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.909739 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.909983 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.910657 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.911530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.911581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:40:58 crc kubenswrapper[4813]: I0217 08:40:58.911598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.036692 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:07:43.794793998 +0000 UTC
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.147929 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.212738 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768"}
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.212829 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25"}
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.212792 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.212863 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.214247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.214273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.214283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.214368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.214407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.214435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.515724 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.517431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.517501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.517522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.517567 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 17 08:40:59 crc kubenswrapper[4813]: I0217 08:40:59.642583 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.037079 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:21:26.563853978 +0000 UTC
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.215591 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.217741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.217785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.217805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.276051 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.276245 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.277551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.277593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.277609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.984192 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.984477 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.986103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.986168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:00 crc kubenswrapper[4813]: I0217 08:41:00.986189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:01 crc kubenswrapper[4813]: I0217 08:41:01.037492 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:41:58.545439271 +0000 UTC
Feb 17 08:41:01 crc kubenswrapper[4813]: I0217 08:41:01.219054 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:41:01 crc kubenswrapper[4813]: I0217 08:41:01.220684 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:01 crc kubenswrapper[4813]: I0217 08:41:01.220751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:01 crc kubenswrapper[4813]: I0217 08:41:01.220773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:01 crc kubenswrapper[4813]: I0217 08:41:01.910887 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 08:41:01 crc kubenswrapper[4813]: I0217 08:41:01.911011 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 08:41:02 crc kubenswrapper[4813]: I0217 08:41:02.038076 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:52:22.879860339 +0000 UTC
Feb 17 08:41:02 crc kubenswrapper[4813]: I0217 08:41:02.126985 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 08:41:02 crc kubenswrapper[4813]: I0217 08:41:02.127297 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:41:02 crc kubenswrapper[4813]: I0217 08:41:02.129150 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:02 crc kubenswrapper[4813]: I0217 08:41:02.129216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:02 crc kubenswrapper[4813]: I0217 08:41:02.129243 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:03 crc kubenswrapper[4813]: I0217 08:41:03.038836 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:12:31.57603333 +0000 UTC
Feb 17 08:41:03 crc kubenswrapper[4813]: E0217 08:41:03.210007 4813 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 17 08:41:03 crc kubenswrapper[4813]: I0217 08:41:03.789803 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 08:41:03 crc kubenswrapper[4813]: I0217 08:41:03.790043 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:41:03 crc kubenswrapper[4813]: I0217 08:41:03.791670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:03 crc kubenswrapper[4813]: I0217 08:41:03.791754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:03 crc kubenswrapper[4813]: I0217 08:41:03.791773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:04 crc kubenswrapper[4813]: I0217 08:41:04.039492 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:14:46.134708181 +0000 UTC
Feb 17 08:41:04 crc kubenswrapper[4813]: I0217 08:41:04.122392 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 08:41:04 crc kubenswrapper[4813]: I0217 08:41:04.133146 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 08:41:04 crc kubenswrapper[4813]: I0217 08:41:04.226757 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:41:04 crc kubenswrapper[4813]: I0217 08:41:04.228399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:04 crc kubenswrapper[4813]: I0217 08:41:04.228457 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:04 crc kubenswrapper[4813]: I0217 08:41:04.228475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:04 crc kubenswrapper[4813]: I0217 08:41:04.233835 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 08:41:05 crc kubenswrapper[4813]: I0217 08:41:05.040686 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:42:30.540343534 +0000 UTC
Feb 17 08:41:05 crc kubenswrapper[4813]: I0217 08:41:05.080394 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 17 08:41:05 crc kubenswrapper[4813]: I0217 08:41:05.080631 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:41:05 crc kubenswrapper[4813]: I0217 08:41:05.082152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:05 crc kubenswrapper[4813]: I0217 08:41:05.082278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:05 crc kubenswrapper[4813]: I0217 08:41:05.082299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:05 crc kubenswrapper[4813]: I0217 08:41:05.229768 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:41:05 crc kubenswrapper[4813]: I0217 08:41:05.231058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:05 crc kubenswrapper[4813]: I0217 08:41:05.231109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:05 crc kubenswrapper[4813]: I0217 08:41:05.231127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:06 crc kubenswrapper[4813]: I0217 08:41:06.040866 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:59:20.799899423 +0000 UTC
Feb 17 08:41:06 crc kubenswrapper[4813]: I0217 08:41:06.232491 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:41:06 crc kubenswrapper[4813]: I0217 08:41:06.233798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:06 crc kubenswrapper[4813]: I0217 08:41:06.233846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:06 crc kubenswrapper[4813]: I0217 08:41:06.233864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:07 crc kubenswrapper[4813]: I0217 08:41:07.032002 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 17 08:41:07 crc kubenswrapper[4813]: I0217 08:41:07.041516 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:50:10.582231736 +0000 UTC
Feb 17 08:41:07 crc kubenswrapper[4813]: I0217 08:41:07.240752 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 17 08:41:07 crc kubenswrapper[4813]: I0217 08:41:07.240824 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 17 08:41:07 crc kubenswrapper[4813]: I0217 08:41:07.248210 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 17 08:41:07 crc kubenswrapper[4813]: I0217 08:41:07.248291 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 17 08:41:08 crc kubenswrapper[4813]: I0217 08:41:08.042061 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:13:30.745863048 +0000 UTC
Feb 17 08:41:09 crc kubenswrapper[4813]: I0217 08:41:09.042224 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:03:26.403900683 +0000 UTC
Feb 17 08:41:10 crc kubenswrapper[4813]: I0217 08:41:10.043952 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:15:42.70843384 +0000 UTC
Feb 17 08:41:10 crc kubenswrapper[4813]: I0217 08:41:10.992712 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 08:41:10 crc kubenswrapper[4813]: I0217 08:41:10.992998 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:41:10 crc kubenswrapper[4813]: I0217 08:41:10.997158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:10 crc kubenswrapper[4813]: I0217 08:41:10.997547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:10 crc kubenswrapper[4813]: I0217 08:41:10.997923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:11 crc kubenswrapper[4813]: I0217 08:41:11.002908 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 08:41:11 crc kubenswrapper[4813]: I0217 08:41:11.045045 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:34:46.110571514 +0000 UTC
Feb 17 08:41:11 crc kubenswrapper[4813]: I0217 08:41:11.246959 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 08:41:11 crc kubenswrapper[4813]: I0217 08:41:11.247396 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 17 08:41:11 crc kubenswrapper[4813]: I0217 08:41:11.248750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:11 crc kubenswrapper[4813]: I0217 08:41:11.248994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:11 crc kubenswrapper[4813]: I0217 08:41:11.249148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:11 crc kubenswrapper[4813]: I0217 08:41:11.910297 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 08:41:11 crc kubenswrapper[4813]: I0217 08:41:11.910619 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.045711 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:27:11.989970589 +0000 UTC
Feb 17 08:41:12 crc kubenswrapper[4813]: E0217 08:41:12.225057 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.227807 4813 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.228009 4813 trace.go:236] Trace[75445619]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 08:41:00.901) (total time: 11326ms):
Feb 17 08:41:12 crc kubenswrapper[4813]: Trace[75445619]: ---"Objects listed" error: 11326ms (08:41:12.227)
Feb 17 08:41:12 crc kubenswrapper[4813]: Trace[75445619]: [11.326233693s] [11.326233693s] END
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.228228 4813 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 17 08:41:12 crc kubenswrapper[4813]: E0217 08:41:12.230115 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.230922 4813 trace.go:236] Trace[1502011690]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 08:41:00.588) (total time: 11641ms):
Feb 17 08:41:12 crc kubenswrapper[4813]: Trace[1502011690]: ---"Objects listed" error: 11641ms (08:41:12.230)
Feb 17 08:41:12 crc kubenswrapper[4813]: Trace[1502011690]: [11.641874785s] [11.641874785s] END
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.231345 4813 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.230933 4813 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.231759 4813 trace.go:236] Trace[83870233]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 08:40:57.327) (total time: 14904ms):
Feb 17 08:41:12 crc kubenswrapper[4813]: Trace[83870233]: ---"Objects listed" error: 14904ms (08:41:12.231)
Feb 17 08:41:12 crc kubenswrapper[4813]: Trace[83870233]: [14.904642342s] [14.904642342s] END
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.231790 4813 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.263580 4813 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.293946 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47572->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.294428 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47572->192.168.126.11:17697: read: connection reset by peer"
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.294082 4813 csr.go:261] certificate signing request csr-rlkff is approved, waiting to be issued
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.297130 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.297227 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.325784 4813 csr.go:257] certificate signing request csr-rlkff is issued
Feb 17 08:41:12 crc kubenswrapper[4813]: I0217 08:41:12.865860 4813 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 17 08:41:12 crc kubenswrapper[4813]: W0217 08:41:12.866263 4813 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 17 08:41:12 crc kubenswrapper[4813]: W0217 08:41:12.866269 4813 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 17 08:41:12 crc kubenswrapper[4813]: W0217 08:41:12.866376 4813 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 17 08:41:12 crc kubenswrapper[4813]: W0217 08:41:12.866375 4813 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 17 08:41:12 crc kubenswrapper[4813]: E0217 08:41:12.866238 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.113:42628->38.102.83.113:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894fc05696d3212 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 08:40:53.637247506 +0000 UTC m=+1.298008759,LastTimestamp:2026-02-17 08:40:53.637247506 +0000 UTC m=+1.298008759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.024072 4813 apiserver.go:52] "Watching apiserver"
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.041607 4813 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.041877 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.042246 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.042291 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.042323 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.042341 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.042781 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.042798 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.042828 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.042863 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.043111 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.044389 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.045441 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.045758 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.045826 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:19:17.081333165 +0000 UTC Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.046145 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.046258 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.046626 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.046796 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.046923 4813 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.047081 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.067480 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.079820 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.087948 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.096156 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.108040 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.119981 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.131109 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.135749 4813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.137170 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.137293 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.137402 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.137506 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.137592 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 08:41:13 crc 
kubenswrapper[4813]: I0217 08:41:13.137677 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.137783 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.137930 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.137817 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138126 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138397 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138425 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138015 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138482 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138075 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138286 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138295 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138505 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138553 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138575 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138620 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138638 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138656 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138669 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138674 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138716 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138741 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138752 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" 
(UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138815 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138806 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138841 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138862 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138883 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 
08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138930 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138969 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138993 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139020 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139043 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139090 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139115 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139136 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139160 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139182 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139204 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 
08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139227 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139278 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139321 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139530 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139552 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139574 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139595 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139616 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139637 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139693 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139716 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 08:41:13 crc 
kubenswrapper[4813]: I0217 08:41:13.139737 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139759 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.143251 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.143303 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.143373 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138859 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" 
(OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.144872 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.138982 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139524 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.139762 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.141999 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.142294 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.142905 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.143198 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.143522 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.143757 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.143854 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.143898 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.144091 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.145101 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.144268 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.144452 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.144471 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.144620 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.144803 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.144848 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.145262 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.145282 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.145357 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.145433 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.145648 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.145673 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.145782 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.145836 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146049 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146277 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146566 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146620 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146029 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod 
"01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146652 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146683 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146784 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146835 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146867 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146895 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146919 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146949 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.146978 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.147003 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.147031 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.148294 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.148612 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.148671 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.148698 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.148865 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:41:13.648833766 +0000 UTC m=+21.309594999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.148899 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.148928 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.148987 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149035 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149070 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149093 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149115 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149138 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149159 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149271 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149299 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149353 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149377 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149399 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149417 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149437 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149465 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149491 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149512 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149534 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149558 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149580 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149599 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149619 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149639 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149655 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149674 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149695 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149713 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149759 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149783 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149804 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149825 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149865 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149886 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149907 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149932 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149955 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149981 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150009 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150046 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150076 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150096 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150127 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150143 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150165 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150188 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150207 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150225 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150245 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150267 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150285 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150325 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150344 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150361 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150382 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150406 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150470 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150491 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150509 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150529 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150544 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150564 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150584 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150604 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150620 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150638 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150657 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150673 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150691 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150711 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150729 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150748 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150767 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151013 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151121 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151182 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151206 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151225 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151250 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151290 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151334 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151372 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151413 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151435 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151455 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151476 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151497 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151522 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151542 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151566 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151589 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151607 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151636 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151664 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151682 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151703
4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151722 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151782 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151826 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151852 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151872 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151911 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151934 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151956 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152002 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152079 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 08:41:13 crc 
kubenswrapper[4813]: I0217 08:41:13.152121 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152169 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152187 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152230 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152267 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152290 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152331 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152349 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152370 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152484 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152505 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: 
\"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152557 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152579 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152598 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152615 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152638 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152659 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152680 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152700 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153365 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153392 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153455 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153483 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153695 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153749 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153889 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153914 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153940 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153997 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.154023 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 
08:41:13.154074 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.154098 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149432 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.154141 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.149467 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.154170 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.150215 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151391 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151786 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151880 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152425 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152604 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152637 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152650 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152882 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153137 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153170 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153199 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.151899 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.152903 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153835 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.153794 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.154483 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.154622 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.155200 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.155214 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.155480 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.155772 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.155861 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.156164 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.156193 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.156402 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.157042 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.156466 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.156495 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.156520 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.157038 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.157171 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.157968 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.158129 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.158205 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.158435 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.158547 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.158714 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.158750 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.158743 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.159026 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.159350 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.159398 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.159586 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.159880 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.160199 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.160554 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.160580 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.160428 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.160453 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.160688 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.161034 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.161108 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.161054 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.161119 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.161589 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.161650 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.161674 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.161701 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.161961 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.162052 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.162554 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.162652 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163053 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163113 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163131 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163259 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163285 4813 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163329 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163346 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 17 
08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163361 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163376 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163387 4813 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163416 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163429 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163440 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163451 4813 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163460 4813 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163166 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163257 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.163272 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.164229 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.164771 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165056 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165091 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165074 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165246 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165272 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165289 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165335 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165353 4813 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165368 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165382 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165396 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165419 4813 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165433 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165448 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165467 4813 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165482 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165498 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc 
kubenswrapper[4813]: I0217 08:41:13.165528 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165548 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165562 4813 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165575 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165589 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165602 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165615 4813 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165633 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165653 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165673 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165691 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165711 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165729 4813 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165747 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165765 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" 
DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165784 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165274 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165828 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165888 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165327 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165653 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165673 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165694 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165758 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.166104 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.166129 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.165465 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.166575 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.166605 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.166635 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.166636 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.166364 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.166673 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.166713 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.166854 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.166876 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.167555 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.167589 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.167634 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.167741 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.167800 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.167809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.167823 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.167918 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.168058 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.168068 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.168218 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.168277 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.168364 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.168456 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.168790 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.168905 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.168994 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.169148 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.169170 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.170346 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.170557 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.170745 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.171346 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.171538 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:13.671512599 +0000 UTC m=+21.332273842 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.171658 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.171842 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:13.671824297 +0000 UTC m=+21.332585610 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.172650 4813 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.173165 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.173249 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.173410 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.174405 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.174590 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.174755 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.174931 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.175621 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.175711 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.176839 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.176907 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.177425 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.177453 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.177485 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.180974 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.179632 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.184225 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.184281 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.184631 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.185115 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.186881 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.186757 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.187108 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.187452 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.187474 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.188183 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.188213 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.188229 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.188608 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:13.688482623 +0000 UTC m=+21.349243846 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.188941 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.190158 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.190735 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.190811 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.190950 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.190970 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.190982 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.191022 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:13.691008605 +0000 UTC m=+21.351769938 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.191746 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.192529 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.195406 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.196077 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: 
"09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.196203 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.197166 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.199369 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.202174 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.202540 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.202670 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.202624 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.202944 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.202998 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.203269 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.203326 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.203796 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.206204 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.207864 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.213834 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.216055 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.229149 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.229265 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.255211 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.257003 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad" exitCode=255 Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.257069 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad"} Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.266839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.266914 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267033 4813 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267048 4813 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267057 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267066 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267077 4813 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267090 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267099 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267109 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267122 4813 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267133 4813 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267145 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267155 4813 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267167 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267178 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267190 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267219 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267279 4813 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267365 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267403 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267416 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267426 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267435 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: 
I0217 08:41:13.267444 4813 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267471 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267481 4813 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267489 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267560 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267573 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267582 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267606 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267614 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267623 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267632 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267672 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267685 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267696 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267706 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node 
\"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267715 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267725 4813 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267733 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267742 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267750 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267758 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267767 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267775 
4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267783 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267791 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267799 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267809 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267818 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267826 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267834 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267843 4813 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267850 4813 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267858 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267866 4813 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267874 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267881 4813 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267890 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node 
\"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267899 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267907 4813 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267915 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267927 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267936 4813 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267945 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267955 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: 
I0217 08:41:13.267964 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267972 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267980 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267988 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.267996 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268004 4813 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268012 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268021 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268052 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268063 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268073 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268081 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268089 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268097 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268105 4813 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268113 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268121 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268131 4813 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268140 4813 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268149 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268158 4813 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268166 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc 
kubenswrapper[4813]: I0217 08:41:13.268175 4813 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268184 4813 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268192 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268200 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268208 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268217 4813 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268226 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: 
I0217 08:41:13.268238 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268250 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268261 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268274 4813 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268284 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268295 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268331 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268342 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268351 4813 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268360 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268371 4813 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268382 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268393 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268404 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268412 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268421 4813 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268429 4813 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268439 4813 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268447 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268456 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268465 4813 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268475 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268484 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268492 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268500 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268508 4813 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268516 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268524 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268532 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") 
on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268541 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268549 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268557 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268565 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268573 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268581 4813 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268589 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: 
I0217 08:41:13.268598 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268606 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.268615 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269249 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269265 4813 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269337 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269353 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269367 4813 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269381 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269395 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269441 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269454 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269467 4813 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269515 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269530 4813 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269540 4813 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269552 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269589 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269709 4813 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269728 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269760 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.269894 4813 scope.go:117] "RemoveContainer" containerID="5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 
08:41:13.270010 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.270118 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.280144 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.293620 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.306968 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.318990 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.327484 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 08:36:12 +0000 UTC, rotation deadline is 2026-11-01 03:18:49.540728432 +0000 UTC Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.327575 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6162h37m36.213159245s for next certificate rotation Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.329691 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.356250 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.364290 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 08:41:13 crc kubenswrapper[4813]: W0217 08:41:13.367458 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-a88a70396c7e5f61c53b918297c1ea9858f236874e7f9e30ea8b265b371d5e8a WatchSource:0}: Error finding container a88a70396c7e5f61c53b918297c1ea9858f236874e7f9e30ea8b265b371d5e8a: Status 404 returned error can't find the container with id a88a70396c7e5f61c53b918297c1ea9858f236874e7f9e30ea8b265b371d5e8a Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.370906 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.672393 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.672479 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.672502 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.672602 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:41:14.672572905 +0000 UTC m=+22.333334118 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.672616 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.672686 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:14.672666867 +0000 UTC m=+22.333428090 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.672692 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.672783 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:14.67276338 +0000 UTC m=+22.333524603 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.773803 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.773851 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.773988 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.773986 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.774036 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.774049 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.774102 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:14.774085972 +0000 UTC m=+22.434847195 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.774008 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.774125 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:13 crc kubenswrapper[4813]: E0217 08:41:13.774180 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:14.774165234 +0000 UTC m=+22.434926447 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:13 crc kubenswrapper[4813]: I0217 08:41:13.806974 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.046170 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:06:28.714836056 +0000 UTC Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.070773 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qlb2w"] Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.071203 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qlb2w" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.072966 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-swpdn"] Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.073173 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.073465 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.073810 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.074784 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.075178 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.075247 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.075849 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.075910 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.077230 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.092934 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.110148 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.110301 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.119001 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.144732 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.175049 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176286 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-multus-daemon-config\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " 
pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176355 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-cni-binary-copy\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176375 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-cnibin\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176396 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-run-k8s-cni-cncf-io\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176421 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-multus-cni-dir\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176438 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-hostroot\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 
08:41:14.176454 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-etc-kubernetes\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176528 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d02a7de9-7ac4-40c0-908d-dd8036e26724-hosts-file\") pod \"node-resolver-qlb2w\" (UID: \"d02a7de9-7ac4-40c0-908d-dd8036e26724\") " pod="openshift-dns/node-resolver-qlb2w" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176551 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-system-cni-dir\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176568 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-multus-conf-dir\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176588 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmc2m\" (UniqueName: \"kubernetes.io/projected/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-kube-api-access-hmc2m\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176698 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-var-lib-kubelet\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176797 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-os-release\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176826 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-var-lib-cni-bin\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176845 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-run-multus-certs\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176875 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr5mb\" (UniqueName: \"kubernetes.io/projected/d02a7de9-7ac4-40c0-908d-dd8036e26724-kube-api-access-zr5mb\") pod \"node-resolver-qlb2w\" (UID: \"d02a7de9-7ac4-40c0-908d-dd8036e26724\") " pod="openshift-dns/node-resolver-qlb2w" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176902 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-multus-socket-dir-parent\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.176920 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-run-netns\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.177007 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-var-lib-cni-multus\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.195018 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.214583 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.226240 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.240886 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.255104 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.261491 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.263094 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393"} Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.263783 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.265357 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97"} Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.265380 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cd4d7a8f80fae4bebaa85b97a130f9e8ba22ca13269d50f7922c68d13b79aabc"} Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.267204 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be"} Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.267227 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace"} Feb 17 08:41:14 crc 
kubenswrapper[4813]: I0217 08:41:14.267239 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9d1df4029304e2f10480e1a2a9ce451f947a18f150a35c1a6bd40d6493d15e52"} Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.268445 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a88a70396c7e5f61c53b918297c1ea9858f236874e7f9e30ea8b265b371d5e8a"} Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.268878 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278091 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-os-release\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc 
kubenswrapper[4813]: I0217 08:41:14.278118 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-var-lib-cni-bin\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278138 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-var-lib-kubelet\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278158 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-run-multus-certs\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278175 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr5mb\" (UniqueName: \"kubernetes.io/projected/d02a7de9-7ac4-40c0-908d-dd8036e26724-kube-api-access-zr5mb\") pod \"node-resolver-qlb2w\" (UID: \"d02a7de9-7ac4-40c0-908d-dd8036e26724\") " pod="openshift-dns/node-resolver-qlb2w" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278192 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-multus-socket-dir-parent\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278209 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-run-netns\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278225 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-var-lib-cni-multus\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278255 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-cni-binary-copy\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278272 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-multus-daemon-config\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278289 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-cnibin\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278322 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-run-k8s-cni-cncf-io\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278317 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-run-multus-certs\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278338 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-multus-socket-dir-parent\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278359 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-run-netns\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278380 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-os-release\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278350 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-multus-cni-dir\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " 
pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278331 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-var-lib-kubelet\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278382 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-run-k8s-cni-cncf-io\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278382 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-var-lib-cni-multus\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278461 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-multus-cni-dir\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278396 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-cnibin\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278460 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"hostroot\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-hostroot\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278499 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-hostroot\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278284 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-host-var-lib-cni-bin\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278534 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-etc-kubernetes\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278561 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-etc-kubernetes\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278603 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d02a7de9-7ac4-40c0-908d-dd8036e26724-hosts-file\") pod \"node-resolver-qlb2w\" (UID: \"d02a7de9-7ac4-40c0-908d-dd8036e26724\") " 
pod="openshift-dns/node-resolver-qlb2w" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278636 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-system-cni-dir\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278652 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-multus-conf-dir\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278653 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d02a7de9-7ac4-40c0-908d-dd8036e26724-hosts-file\") pod \"node-resolver-qlb2w\" (UID: \"d02a7de9-7ac4-40c0-908d-dd8036e26724\") " pod="openshift-dns/node-resolver-qlb2w" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278671 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmc2m\" (UniqueName: \"kubernetes.io/projected/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-kube-api-access-hmc2m\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278697 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-system-cni-dir\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.278733 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-multus-conf-dir\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.279005 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-cni-binary-copy\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.279088 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-multus-daemon-config\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.279607 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.292505 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.295230 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr5mb\" (UniqueName: \"kubernetes.io/projected/d02a7de9-7ac4-40c0-908d-dd8036e26724-kube-api-access-zr5mb\") pod \"node-resolver-qlb2w\" (UID: \"d02a7de9-7ac4-40c0-908d-dd8036e26724\") " pod="openshift-dns/node-resolver-qlb2w" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.303528 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmc2m\" (UniqueName: \"kubernetes.io/projected/9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0-kube-api-access-hmc2m\") pod \"multus-swpdn\" (UID: \"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\") " pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.313420 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.339385 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.357192 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.369940 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.385240 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qlb2w" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.387038 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.389408 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-swpdn" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.410127 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.424135 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.451972 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.481282 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ckxzc"] Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.481878 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.482050 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.482262 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-w2pz7"] Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.482497 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.483267 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.483807 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.489398 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.490092 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.490243 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.490411 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.491716 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.510226 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.536001 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.553240 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.579520 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.581956 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw4g6\" (UniqueName: \"kubernetes.io/projected/3a6ba827-b08b-4163-b067-d9adb119398d-kube-api-access-cw4g6\") pod \"machine-config-daemon-w2pz7\" (UID: \"3a6ba827-b08b-4163-b067-d9adb119398d\") " pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.582020 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9b909000-c40e-4ffd-b174-425ab3c9fe6a-system-cni-dir\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.582039 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9b909000-c40e-4ffd-b174-425ab3c9fe6a-os-release\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.582119 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a6ba827-b08b-4163-b067-d9adb119398d-mcd-auth-proxy-config\") pod \"machine-config-daemon-w2pz7\" (UID: \"3a6ba827-b08b-4163-b067-d9adb119398d\") " pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.582165 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b909000-c40e-4ffd-b174-425ab3c9fe6a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.582180 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a6ba827-b08b-4163-b067-d9adb119398d-proxy-tls\") pod \"machine-config-daemon-w2pz7\" (UID: \"3a6ba827-b08b-4163-b067-d9adb119398d\") " pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" 
Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.582236 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9b909000-c40e-4ffd-b174-425ab3c9fe6a-cnibin\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.582252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6ch\" (UniqueName: \"kubernetes.io/projected/9b909000-c40e-4ffd-b174-425ab3c9fe6a-kube-api-access-dj6ch\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.582272 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b909000-c40e-4ffd-b174-425ab3c9fe6a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.582297 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3a6ba827-b08b-4163-b067-d9adb119398d-rootfs\") pod \"machine-config-daemon-w2pz7\" (UID: \"3a6ba827-b08b-4163-b067-d9adb119398d\") " pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.582332 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9b909000-c40e-4ffd-b174-425ab3c9fe6a-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.592609 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.605751 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.617976 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.630044 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.643000 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.655512 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.671569 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683264 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683351 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9b909000-c40e-4ffd-b174-425ab3c9fe6a-cnibin\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683379 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj6ch\" (UniqueName: \"kubernetes.io/projected/9b909000-c40e-4ffd-b174-425ab3c9fe6a-kube-api-access-dj6ch\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683398 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683416 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b909000-c40e-4ffd-b174-425ab3c9fe6a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683431 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/9b909000-c40e-4ffd-b174-425ab3c9fe6a-cni-binary-copy\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683447 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3a6ba827-b08b-4163-b067-d9adb119398d-rootfs\") pod \"machine-config-daemon-w2pz7\" (UID: \"3a6ba827-b08b-4163-b067-d9adb119398d\") " pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683472 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw4g6\" (UniqueName: \"kubernetes.io/projected/3a6ba827-b08b-4163-b067-d9adb119398d-kube-api-access-cw4g6\") pod \"machine-config-daemon-w2pz7\" (UID: \"3a6ba827-b08b-4163-b067-d9adb119398d\") " pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683493 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9b909000-c40e-4ffd-b174-425ab3c9fe6a-system-cni-dir\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683509 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9b909000-c40e-4ffd-b174-425ab3c9fe6a-os-release\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683527 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683543 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a6ba827-b08b-4163-b067-d9adb119398d-mcd-auth-proxy-config\") pod \"machine-config-daemon-w2pz7\" (UID: \"3a6ba827-b08b-4163-b067-d9adb119398d\") " pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683560 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b909000-c40e-4ffd-b174-425ab3c9fe6a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683580 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a6ba827-b08b-4163-b067-d9adb119398d-proxy-tls\") pod \"machine-config-daemon-w2pz7\" (UID: \"3a6ba827-b08b-4163-b067-d9adb119398d\") " pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683808 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3a6ba827-b08b-4163-b067-d9adb119398d-rootfs\") pod \"machine-config-daemon-w2pz7\" (UID: \"3a6ba827-b08b-4163-b067-d9adb119398d\") " pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 
08:41:14.683916 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:41:16.68389193 +0000 UTC m=+24.344653163 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.683914 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9b909000-c40e-4ffd-b174-425ab3c9fe6a-cnibin\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.684068 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.684157 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.684088 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9b909000-c40e-4ffd-b174-425ab3c9fe6a-system-cni-dir\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: 
\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.684221 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b909000-c40e-4ffd-b174-425ab3c9fe6a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.684279 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9b909000-c40e-4ffd-b174-425ab3c9fe6a-os-release\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.684179 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:16.684159617 +0000 UTC m=+24.344920840 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.684452 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:16.684440823 +0000 UTC m=+24.345202046 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.684735 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b909000-c40e-4ffd-b174-425ab3c9fe6a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.684816 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9b909000-c40e-4ffd-b174-425ab3c9fe6a-cni-binary-copy\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.685288 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a6ba827-b08b-4163-b067-d9adb119398d-mcd-auth-proxy-config\") pod \"machine-config-daemon-w2pz7\" (UID: \"3a6ba827-b08b-4163-b067-d9adb119398d\") " pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.689716 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a6ba827-b08b-4163-b067-d9adb119398d-proxy-tls\") pod \"machine-config-daemon-w2pz7\" (UID: \"3a6ba827-b08b-4163-b067-d9adb119398d\") " pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 
08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.689975 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.701957 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj6ch\" (UniqueName: \"kubernetes.io/projected/9b909000-c40e-4ffd-b174-425ab3c9fe6a-kube-api-access-dj6ch\") pod \"multus-additional-cni-plugins-ckxzc\" (UID: \"9b909000-c40e-4ffd-b174-425ab3c9fe6a\") " pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.714899 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw4g6\" (UniqueName: \"kubernetes.io/projected/3a6ba827-b08b-4163-b067-d9adb119398d-kube-api-access-cw4g6\") pod \"machine-config-daemon-w2pz7\" (UID: \"3a6ba827-b08b-4163-b067-d9adb119398d\") " pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.713159 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.730535 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.740761 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d
773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.758159 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.784473 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.784518 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.784636 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.784653 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.784665 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.784711 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:16.7846981 +0000 UTC m=+24.445459323 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.784764 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.784818 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.784846 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:14 crc kubenswrapper[4813]: E0217 08:41:14.784953 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:16.784925435 +0000 UTC m=+24.445686688 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.798530 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.806760 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:41:14 crc kubenswrapper[4813]: W0217 08:41:14.829361 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a6ba827_b08b_4163_b067_d9adb119398d.slice/crio-353159b782f599451b6f9865e22aa031f65db7d93bf4a53d96d8cb80afd4eda8 WatchSource:0}: Error finding container 353159b782f599451b6f9865e22aa031f65db7d93bf4a53d96d8cb80afd4eda8: Status 404 returned error can't find the container with id 353159b782f599451b6f9865e22aa031f65db7d93bf4a53d96d8cb80afd4eda8 Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.876180 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qsj6b"] Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.877168 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.879067 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.879281 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.882040 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.882236 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.882293 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.882361 4813 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.882299 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.897294 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.913249 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.932521 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.962658 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.985673 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-kubelet\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.985732 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-systemd\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.985764 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-openvswitch\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.985803 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-ovn\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.985878 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3513e95a-8ab1-42f1-8aa5-37400db92720-ovn-node-metrics-cert\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.985930 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-run-netns\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.985969 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-systemd-units\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.985997 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-cni-bin\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.986045 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-run-ovn-kubernetes\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.986066 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8p8t\" (UniqueName: \"kubernetes.io/projected/3513e95a-8ab1-42f1-8aa5-37400db92720-kube-api-access-z8p8t\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.986095 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-log-socket\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.986116 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-cni-netd\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.986134 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-env-overrides\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc 
kubenswrapper[4813]: I0217 08:41:14.986170 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-var-lib-openvswitch\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.986258 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-node-log\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.986332 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-ovnkube-script-lib\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.986401 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-etc-openvswitch\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.986425 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qsj6b\" (UID: 
\"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.986452 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-slash\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.986496 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-ovnkube-config\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:14 crc kubenswrapper[4813]: I0217 08:41:14.996635 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-me
trics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"
etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":
\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:14Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.031370 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.046803 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:05:06.306340919 +0000 UTC Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.063972 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.082011 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087035 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-ovnkube-config\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087088 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-kubelet\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087122 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-systemd\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087153 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-openvswitch\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087186 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-ovn\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087227 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3513e95a-8ab1-42f1-8aa5-37400db92720-ovn-node-metrics-cert\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087243 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-kubelet\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087261 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-run-netns\") 
pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087319 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-systemd-units\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087338 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-cni-bin\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087348 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-run-netns\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087381 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-run-ovn-kubernetes\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087400 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8p8t\" (UniqueName: \"kubernetes.io/projected/3513e95a-8ab1-42f1-8aa5-37400db92720-kube-api-access-z8p8t\") pod \"ovnkube-node-qsj6b\" (UID: 
\"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087410 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-systemd\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087424 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-log-socket\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-cni-netd\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087472 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-env-overrides\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087485 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-openvswitch\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 
crc kubenswrapper[4813]: I0217 08:41:15.087491 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-var-lib-openvswitch\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087529 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-systemd-units\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087555 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-cni-bin\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087563 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-node-log\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087578 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-run-ovn-kubernetes\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087598 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-ovnkube-script-lib\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087650 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-etc-openvswitch\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087685 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087715 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-slash\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087742 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-ovnkube-config\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087790 4813 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-slash\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087803 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-log-socket\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087805 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-node-log\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087829 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-ovn\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087844 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-cni-netd\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087514 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-var-lib-openvswitch\") pod \"ovnkube-node-qsj6b\" 
(UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.087990 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.088040 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-etc-openvswitch\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.088385 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-env-overrides\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.088611 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-ovnkube-script-lib\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.090994 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3513e95a-8ab1-42f1-8aa5-37400db92720-ovn-node-metrics-cert\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.094794 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.105332 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8p8t\" (UniqueName: \"kubernetes.io/projected/3513e95a-8ab1-42f1-8aa5-37400db92720-kube-api-access-z8p8t\") pod \"ovnkube-node-qsj6b\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.105625 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.108605 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.110137 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:15 crc kubenswrapper[4813]: E0217 08:41:15.110212 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.110137 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:15 crc kubenswrapper[4813]: E0217 08:41:15.110380 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.114945 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.116542 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.117385 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.118478 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.119072 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.119995 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.120685 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.121238 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.122704 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.123419 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.124375 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.124481 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.125140 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.125621 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.126517 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.127028 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.127918 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.128552 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.128949 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.129884 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" 
path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.130463 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.131336 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.131918 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.132415 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.133423 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.133807 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.134851 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.135478 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.136389 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.136926 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.136935 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.137774 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.138217 4813 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.138397 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.140046 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: 
I0217 08:41:15.141700 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.142212 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.144808 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.146106 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.146827 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.147292 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.148138 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.148814 4813 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.150459 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.151104 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.152188 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.152775 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.157659 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.158419 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.159197 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.160514 4813 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.160980 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.161363 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.161872 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.162361 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.162965 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.163979 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.164463 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.165551 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.165589 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.175405 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.196976 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.204350 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: W0217 08:41:15.214717 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3513e95a_8ab1_42f1_8aa5_37400db92720.slice/crio-db10204b695a5ec40f63524d829c13b4197fdeceac65069487d8d76acb908bdf WatchSource:0}: Error finding container db10204b695a5ec40f63524d829c13b4197fdeceac65069487d8d76acb908bdf: Status 404 returned error can't find the container with id db10204b695a5ec40f63524d829c13b4197fdeceac65069487d8d76acb908bdf Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 
08:41:15.219396 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.236870 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.252448 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.271479 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"db10204b695a5ec40f63524d829c13b4197fdeceac65069487d8d76acb908bdf"} Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.272801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qlb2w" event={"ID":"d02a7de9-7ac4-40c0-908d-dd8036e26724","Type":"ContainerStarted","Data":"0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436"} Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.272836 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qlb2w" 
event={"ID":"d02a7de9-7ac4-40c0-908d-dd8036e26724","Type":"ContainerStarted","Data":"2e715c83f415bd8117acc4ddf0b8fbee3411223b84f5c744e6ea1e5e4fa174fc"} Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.274188 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerStarted","Data":"ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3"} Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.274216 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerStarted","Data":"e8bbac5e7eee4ad535b6f2568bd0459d254170dcba13d0214a55985b939c00e1"} Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.274226 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerStarted","Data":"353159b782f599451b6f9865e22aa031f65db7d93bf4a53d96d8cb80afd4eda8"} Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.275603 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" event={"ID":"9b909000-c40e-4ffd-b174-425ab3c9fe6a","Type":"ContainerStarted","Data":"d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a"} Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.275647 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" event={"ID":"9b909000-c40e-4ffd-b174-425ab3c9fe6a","Type":"ContainerStarted","Data":"1a3222cc951f274d01ed433c69be4facdf91560f82c52fb2d2be567b2a93fe14"} Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.276594 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swpdn" 
event={"ID":"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0","Type":"ContainerStarted","Data":"fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc"} Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.276658 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swpdn" event={"ID":"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0","Type":"ContainerStarted","Data":"6b8a002d52488a06f56e28527bf2d4add396130df55d1bb7a33a56efe0fc1d03"} Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.279433 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"na
me\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.298222 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.309716 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.320995 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.334037 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.345882 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dcba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.359846 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.371647 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.390114 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.407971 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-ce
rt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-open
vswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.446509 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.481201 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.515538 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.556627 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.604384 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.644037 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.674827 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:15 crc kubenswrapper[4813]: I0217 08:41:15.717419 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:15Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.047536 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:27:57.225036876 +0000 UTC Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.053235 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qdj4m"] Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.053621 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qdj4m" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.055907 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.055904 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.057197 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.059256 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.071140 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.093541 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.099019 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rxcp\" (UniqueName: \"kubernetes.io/projected/2b8a9309-df0c-4bc5-bd41-3a54a5cd834d-kube-api-access-9rxcp\") pod \"node-ca-qdj4m\" (UID: \"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\") " pod="openshift-image-registry/node-ca-qdj4m" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.099136 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b8a9309-df0c-4bc5-bd41-3a54a5cd834d-host\") pod \"node-ca-qdj4m\" (UID: \"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\") " pod="openshift-image-registry/node-ca-qdj4m" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.099192 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2b8a9309-df0c-4bc5-bd41-3a54a5cd834d-serviceca\") pod \"node-ca-qdj4m\" (UID: \"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\") " pod="openshift-image-registry/node-ca-qdj4m" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.110914 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.111022 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.117230 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.129846 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.148277 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.169542 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-ce
rt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-open
vswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.192948 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b900
92272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763
216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.199882 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2b8a9309-df0c-4bc5-bd41-3a54a5cd834d-serviceca\") pod \"node-ca-qdj4m\" (UID: \"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\") " pod="openshift-image-registry/node-ca-qdj4m" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.199977 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9rxcp\" (UniqueName: \"kubernetes.io/projected/2b8a9309-df0c-4bc5-bd41-3a54a5cd834d-kube-api-access-9rxcp\") pod \"node-ca-qdj4m\" (UID: \"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\") " pod="openshift-image-registry/node-ca-qdj4m" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.200001 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b8a9309-df0c-4bc5-bd41-3a54a5cd834d-host\") pod \"node-ca-qdj4m\" (UID: \"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\") " pod="openshift-image-registry/node-ca-qdj4m" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.200086 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b8a9309-df0c-4bc5-bd41-3a54a5cd834d-host\") pod \"node-ca-qdj4m\" (UID: \"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\") " pod="openshift-image-registry/node-ca-qdj4m" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.201216 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2b8a9309-df0c-4bc5-bd41-3a54a5cd834d-serviceca\") pod \"node-ca-qdj4m\" (UID: \"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\") " pod="openshift-image-registry/node-ca-qdj4m" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.211666 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.228687 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rxcp\" (UniqueName: \"kubernetes.io/projected/2b8a9309-df0c-4bc5-bd41-3a54a5cd834d-kube-api-access-9rxcp\") pod \"node-ca-qdj4m\" (UID: \"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\") " pod="openshift-image-registry/node-ca-qdj4m" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.229435 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.243252 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.254785 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.282784 4813 generic.go:334] "Generic (PLEG): container finished" podID="9b909000-c40e-4ffd-b174-425ab3c9fe6a" containerID="d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a" exitCode=0 Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.282857 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" 
event={"ID":"9b909000-c40e-4ffd-b174-425ab3c9fe6a","Type":"ContainerDied","Data":"d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a"} Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.285637 4813 generic.go:334] "Generic (PLEG): container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a" exitCode=0 Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.285694 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a"} Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.287871 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5"} Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.296415 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.339722 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.384545 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.428144 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.455897 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.494559 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.532361 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.577015 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qdj4m" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.582524 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.620366 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.671205 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.694822 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.704841 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.704942 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.705030 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:41:20.705000022 +0000 UTC m=+28.365761245 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.705135 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.705280 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.705347 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:20.70534013 +0000 UTC m=+28.366101343 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.705466 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.705568 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:20.705541485 +0000 UTC m=+28.366302778 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.738347 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.775470 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.807042 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.807093 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.807219 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.807236 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.807247 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.807288 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:20.807275547 +0000 UTC m=+28.468036770 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.807417 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.807440 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.807450 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:16 crc kubenswrapper[4813]: E0217 08:41:16.807494 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:20.807480492 +0000 UTC m=+28.468241715 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.815718 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.853465 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.894677 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:16 crc kubenswrapper[4813]: I0217 08:41:16.938012 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:16Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.015185 4813 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.047982 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:11:08.417904655 +0000 UTC Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.110367 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:17 crc kubenswrapper[4813]: E0217 08:41:17.110588 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.111479 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:17 crc kubenswrapper[4813]: E0217 08:41:17.111809 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.292753 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qdj4m" event={"ID":"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d","Type":"ContainerStarted","Data":"3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad"} Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.292795 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qdj4m" event={"ID":"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d","Type":"ContainerStarted","Data":"41423ed2eb6c58dff337546bb1358b6cf015a3b2c033be6bed33ec1d734bb3da"} Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.294931 4813 generic.go:334] "Generic (PLEG): container finished" podID="9b909000-c40e-4ffd-b174-425ab3c9fe6a" containerID="1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb" exitCode=0 Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.294982 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" event={"ID":"9b909000-c40e-4ffd-b174-425ab3c9fe6a","Type":"ContainerDied","Data":"1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb"} Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.300533 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b"} Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.300653 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058"} Feb 17 08:41:17 crc 
kubenswrapper[4813]: I0217 08:41:17.300692 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126"} Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.300717 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028"} Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.300740 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0"} Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.300765 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b"} Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.307229 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.326189 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.347302 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.364361 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.380498 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.402799 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.420383 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.435351 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.451776 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.466554 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.480773 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.491509 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.545717 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.570962 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.592534 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.603416 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.636061 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c
72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.685110 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.718292 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.761743 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.799596 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.845689 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.882911 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.918980 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.966120 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:17 crc kubenswrapper[4813]: I0217 08:41:17.997675 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:17Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.047678 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.048114 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:39:02.942141878 +0000 UTC Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.093710 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.111126 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:18 crc kubenswrapper[4813]: E0217 08:41:18.111339 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.308609 4813 generic.go:334] "Generic (PLEG): container finished" podID="9b909000-c40e-4ffd-b174-425ab3c9fe6a" containerID="f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65" exitCode=0 Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.308681 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" event={"ID":"9b909000-c40e-4ffd-b174-425ab3c9fe6a","Type":"ContainerDied","Data":"f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65"} Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.337899 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.357758 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.377995 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.399228 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.418051 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.437331 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.449873 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.464467 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 
08:41:18.486450 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.500964 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.513609 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.560053 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.598599 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.631248 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.637742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.637800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.637821 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.637925 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.638010 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.667629 4813 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.668067 4813 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.671965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.672022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.672037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.672059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.672077 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:18Z","lastTransitionTime":"2026-02-17T08:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:18 crc kubenswrapper[4813]: E0217 08:41:18.690703 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.695001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.695070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.695081 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.695100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.695112 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:18Z","lastTransitionTime":"2026-02-17T08:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:18 crc kubenswrapper[4813]: E0217 08:41:18.713037 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.717358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.717423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.717437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.717452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.717462 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:18Z","lastTransitionTime":"2026-02-17T08:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:18 crc kubenswrapper[4813]: E0217 08:41:18.735394 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.740489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.740545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.740565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.740589 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.740606 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:18Z","lastTransitionTime":"2026-02-17T08:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:18 crc kubenswrapper[4813]: E0217 08:41:18.755381 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.759830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.759972 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.760068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.760225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.760351 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:18Z","lastTransitionTime":"2026-02-17T08:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:18 crc kubenswrapper[4813]: E0217 08:41:18.778240 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: E0217 08:41:18.778811 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.781625 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.781670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.781686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.781708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.781724 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:18Z","lastTransitionTime":"2026-02-17T08:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.885049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.885097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.885113 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.885141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.885158 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:18Z","lastTransitionTime":"2026-02-17T08:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.916674 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.923602 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.929118 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.937227 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.955838 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.975998 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.987854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.987884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.987896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.987913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.987924 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:18Z","lastTransitionTime":"2026-02-17T08:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:18 crc kubenswrapper[4813]: I0217 08:41:18.995673 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:18Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.011157 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.032789 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 
08:41:19.048368 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 00:56:16.393614555 +0000 UTC Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.060144 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.080191 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.092369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.092411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.092421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.092439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.092452 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:19Z","lastTransitionTime":"2026-02-17T08:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.097489 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.110971 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:19 crc kubenswrapper[4813]: E0217 08:41:19.111128 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.110985 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:19 crc kubenswrapper[4813]: E0217 08:41:19.111240 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.118397 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.141728 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountP
ath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.181512 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.195512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:19 crc 
kubenswrapper[4813]: I0217 08:41:19.195875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.196069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.196261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.196466 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:19Z","lastTransitionTime":"2026-02-17T08:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.217525 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.274947 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08
:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.299200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.299266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.299294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.299355 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.299417 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:19Z","lastTransitionTime":"2026-02-17T08:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.302395 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.322623 4813 generic.go:334] "Generic (PLEG): container finished" podID="9b909000-c40e-4ffd-b174-425ab3c9fe6a" containerID="60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d" exitCode=0 Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.322980 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" event={"ID":"9b909000-c40e-4ffd-b174-425ab3c9fe6a","Type":"ContainerDied","Data":"60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d"} Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.340169 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: E0217 08:41:19.354992 4813 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.402984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.403037 4813 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.403053 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.403082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.403099 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:19Z","lastTransitionTime":"2026-02-17T08:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.403984 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.443172 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.478689 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.505841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.505895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.505905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.505918 4813 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.505926 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:19Z","lastTransitionTime":"2026-02-17T08:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.518699 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 
08:41:19.563785 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.598001 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.608468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:19 crc 
kubenswrapper[4813]: I0217 08:41:19.608510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.608527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.608550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.608568 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:19Z","lastTransitionTime":"2026-02-17T08:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.635091 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.689793 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08
:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.711269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.711376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.711400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.711430 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.711451 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:19Z","lastTransitionTime":"2026-02-17T08:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.717976 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T
08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.762704 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.803162 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.814582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.814639 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.814657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.814682 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.814699 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:19Z","lastTransitionTime":"2026-02-17T08:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.842499 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f2
5ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.875124 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.920888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.921259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:19 crc 
kubenswrapper[4813]: I0217 08:41:19.921277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.921299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.921336 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:19Z","lastTransitionTime":"2026-02-17T08:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.921718 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:19 crc kubenswrapper[4813]: I0217 08:41:19.959440 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:19Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.005801 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.023720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.023762 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.023773 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.023791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.023803 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:20Z","lastTransitionTime":"2026-02-17T08:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.041091 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f2
5ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.046478 4813 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.048851 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:01:27.05556906 +0000 UTC Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.111260 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.111497 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.127222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.127266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.127284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.127341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.127363 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:20Z","lastTransitionTime":"2026-02-17T08:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.133548 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.151254 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.183710 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.226105 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.230006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.230059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.230079 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.230107 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.230125 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:20Z","lastTransitionTime":"2026-02-17T08:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.260929 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.294933 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.329392 4813 generic.go:334] "Generic (PLEG): container finished" podID="9b909000-c40e-4ffd-b174-425ab3c9fe6a" containerID="5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df" exitCode=0 Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.329479 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" event={"ID":"9b909000-c40e-4ffd-b174-425ab3c9fe6a","Type":"ContainerDied","Data":"5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df"} Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.332145 
4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.332192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.332204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.332223 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.332235 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:20Z","lastTransitionTime":"2026-02-17T08:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.335294 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.336818 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30"} Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.378435 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 
08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.418631 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"
mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 
08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.435223 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.435281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.435298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.435346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.435364 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:20Z","lastTransitionTime":"2026-02-17T08:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.457163 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.506085 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a
1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.537399 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.538262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.538290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.538348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.538372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.538386 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:20Z","lastTransitionTime":"2026-02-17T08:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.583777 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.616956 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.641014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.641045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.641054 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.641068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.641077 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:20Z","lastTransitionTime":"2026-02-17T08:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.655847 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.698219 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.705786 4813 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 08:41:20 crc 
kubenswrapper[4813]: I0217 08:41:20.743608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.743669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.743686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.743710 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.743728 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:20Z","lastTransitionTime":"2026-02-17T08:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.748105 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.748281 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.748390 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.748439 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:41:28.748409106 +0000 UTC m=+36.409170369 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.748527 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.748526 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.748590 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:28.748570729 +0000 UTC m=+36.409332022 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.748628 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-17 08:41:28.74860037 +0000 UTC m=+36.409361633 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.762929 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\
\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.801006 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.841184 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.846622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.846690 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.846708 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.846732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.846753 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:20Z","lastTransitionTime":"2026-02-17T08:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.849116 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.849237 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.849517 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.849566 4813 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.849592 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.849608 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.849647 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.849669 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.849684 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:28.849657496 +0000 UTC m=+36.510418759 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:20 crc kubenswrapper[4813]: E0217 08:41:20.849733 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:28.849710297 +0000 UTC m=+36.510471560 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.884947 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.924200 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.949525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.949599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.949622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.949656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.949678 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:20Z","lastTransitionTime":"2026-02-17T08:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.958447 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dcba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:20 crc kubenswrapper[4813]: I0217 08:41:20.996348 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.037438 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.049422 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:55:51.610494908 +0000 UTC Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.052867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.052924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.052943 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.052968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.052989 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:21Z","lastTransitionTime":"2026-02-17T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.090112 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.110841 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.110926 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:21 crc kubenswrapper[4813]: E0217 08:41:21.110998 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:21 crc kubenswrapper[4813]: E0217 08:41:21.111095 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.130624 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.155941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.155981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.155995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.156012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.156025 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:21Z","lastTransitionTime":"2026-02-17T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.259863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.259943 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.259969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.260000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.260024 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:21Z","lastTransitionTime":"2026-02-17T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.344424 4813 generic.go:334] "Generic (PLEG): container finished" podID="9b909000-c40e-4ffd-b174-425ab3c9fe6a" containerID="254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc" exitCode=0
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.344483 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" event={"ID":"9b909000-c40e-4ffd-b174-425ab3c9fe6a","Type":"ContainerDied","Data":"254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc"}
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.363508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.363578 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.363599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.363629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.363652 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:21Z","lastTransitionTime":"2026-02-17T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.367886 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.390260 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.413672 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.432062 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.450158 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.467733 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dcba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.472220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.472266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.472278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.472297 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.472332 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:21Z","lastTransitionTime":"2026-02-17T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.494714 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.525057 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.542640 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.554555 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.570159 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.579148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.579196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.579206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.579225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.579235 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:21Z","lastTransitionTime":"2026-02-17T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.633119 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.659965 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.677462 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.682536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.682567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.682577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.682594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.682606 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:21Z","lastTransitionTime":"2026-02-17T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.722397 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:21Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.784268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.784329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.784346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.784365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.784375 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:21Z","lastTransitionTime":"2026-02-17T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.887598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.887753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.887841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.887928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.888018 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:21Z","lastTransitionTime":"2026-02-17T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.991137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.991183 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.991201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.991224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:21 crc kubenswrapper[4813]: I0217 08:41:21.991237 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:21Z","lastTransitionTime":"2026-02-17T08:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.050598 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 06:04:40.663647364 +0000 UTC Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.093483 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.093546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.093566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.093593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.093612 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:22Z","lastTransitionTime":"2026-02-17T08:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.110668 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:22 crc kubenswrapper[4813]: E0217 08:41:22.110856 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.196233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.196598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.196743 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.196887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.197020 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:22Z","lastTransitionTime":"2026-02-17T08:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.299623 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.299655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.299666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.299684 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.299697 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:22Z","lastTransitionTime":"2026-02-17T08:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.353727 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" event={"ID":"9b909000-c40e-4ffd-b174-425ab3c9fe6a","Type":"ContainerStarted","Data":"14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5"} Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.360953 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3"} Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.361761 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.378886 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.401720 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.402759 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.403236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.403287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:22 crc 
kubenswrapper[4813]: I0217 08:41:22.403379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.403409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.403435 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:22Z","lastTransitionTime":"2026-02-17T08:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.423892 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.444846 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.474360 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.492790 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.508824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.509076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.509190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:22 crc 
kubenswrapper[4813]: I0217 08:41:22.509280 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.509397 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:22Z","lastTransitionTime":"2026-02-17T08:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.513001 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.529745 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.552688 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e23
4105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.582728 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.597608 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.612826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.612886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.612904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 
08:41:22.612929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.612947 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:22Z","lastTransitionTime":"2026-02-17T08:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.632579 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-
dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.653644 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.675493 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c
72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.694628 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.716180 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:22 crc 
kubenswrapper[4813]: I0217 08:41:22.716263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.716287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.716351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.716377 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:22Z","lastTransitionTime":"2026-02-17T08:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.716669 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.731405 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.752750 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e23
4105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.782630 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.812820 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.818923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.818994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.819022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.819052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.819074 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:22Z","lastTransitionTime":"2026-02-17T08:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.831948 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.849415 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.865577 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.880004 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.896725 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.913895 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.922762 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.922836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.922858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.922889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.922912 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:22Z","lastTransitionTime":"2026-02-17T08:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.934418 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.950557 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.968391 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:22 crc kubenswrapper[4813]: I0217 08:41:22.985916 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:22Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.025929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.025995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.026015 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:23 crc 
kubenswrapper[4813]: I0217 08:41:23.026042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.026061 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:23Z","lastTransitionTime":"2026-02-17T08:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.051496 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:31:12.418915132 +0000 UTC Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.110956 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.111079 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:23 crc kubenswrapper[4813]: E0217 08:41:23.111192 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:23 crc kubenswrapper[4813]: E0217 08:41:23.111369 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.128877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.128921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.128933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.128949 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.128963 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:23Z","lastTransitionTime":"2026-02-17T08:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.130058 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z 
is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.142805 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.175438 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac0
45cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.192511 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.214200 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c
72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.231167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.231221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.231237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.231260 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.231279 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:23Z","lastTransitionTime":"2026-02-17T08:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.231153 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.251401 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.266302 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.281087 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.317255 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.333608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.333651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.333663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.333679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.333688 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:23Z","lastTransitionTime":"2026-02-17T08:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.352582 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.364203 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.364656 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.396785 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.398476 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.433501 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.435101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.435143 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.435159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.435184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.435200 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:23Z","lastTransitionTime":"2026-02-17T08:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.485271 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.520297 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.537791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.537813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.537821 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.537834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.537843 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:23Z","lastTransitionTime":"2026-02-17T08:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.571534 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.603163 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.640125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.640186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.640199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.640217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.640228 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:23Z","lastTransitionTime":"2026-02-17T08:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.641500 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.685058 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.719074 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.743455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.743516 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.743534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.743559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.743583 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:23Z","lastTransitionTime":"2026-02-17T08:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.761278 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:
41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.799150 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.846882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.847185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.847262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.847351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.847430 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:23Z","lastTransitionTime":"2026-02-17T08:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.854290 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.879139 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.922465 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.951110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.951174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.951193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.951218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.951237 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:23Z","lastTransitionTime":"2026-02-17T08:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.968774 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:23 crc kubenswrapper[4813]: I0217 08:41:23.996209 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f
9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.042704 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:24Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.051626 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:37:29.93461568 +0000 UTC Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.053652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.053873 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.053988 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.054094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.054190 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:24Z","lastTransitionTime":"2026-02-17T08:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.057158 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.085636 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:24Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.110483 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:24 crc kubenswrapper[4813]: E0217 08:41:24.110796 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.115086 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dcba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:24Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.157163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:24 crc 
kubenswrapper[4813]: I0217 08:41:24.157210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.157221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.157236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.157245 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:24Z","lastTransitionTime":"2026-02-17T08:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.259833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.260089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.260186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.260304 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.260455 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:24Z","lastTransitionTime":"2026-02-17T08:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.363613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.363960 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.363974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.363992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.364004 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:24Z","lastTransitionTime":"2026-02-17T08:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.466724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.466754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.466763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.466776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.466786 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:24Z","lastTransitionTime":"2026-02-17T08:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.569969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.570006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.570014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.570027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.570037 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:24Z","lastTransitionTime":"2026-02-17T08:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.673024 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.673079 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.673093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.673112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.673125 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:24Z","lastTransitionTime":"2026-02-17T08:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.775380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.775415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.775424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.775438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.775447 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:24Z","lastTransitionTime":"2026-02-17T08:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.878977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.879027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.879045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.879071 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.879087 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:24Z","lastTransitionTime":"2026-02-17T08:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.982248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.982342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.982368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.982399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:24 crc kubenswrapper[4813]: I0217 08:41:24.982419 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:24Z","lastTransitionTime":"2026-02-17T08:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.052331 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:38:18.243061404 +0000 UTC Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.084160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.084262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.084282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.084304 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.084370 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:25Z","lastTransitionTime":"2026-02-17T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.110861 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:25 crc kubenswrapper[4813]: E0217 08:41:25.111233 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.111621 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:25 crc kubenswrapper[4813]: E0217 08:41:25.111971 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.187530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.187580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.187603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.187631 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.187653 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:25Z","lastTransitionTime":"2026-02-17T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.290559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.290640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.290663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.290693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.290713 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:25Z","lastTransitionTime":"2026-02-17T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.372994 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/0.log" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.375453 4813 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.379968 4813 generic.go:334] "Generic (PLEG): container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3" exitCode=1 Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.380028 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3"} Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.381392 4813 scope.go:117] "RemoveContainer" containerID="1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.395103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.395762 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.395958 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.396136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.396359 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:25Z","lastTransitionTime":"2026-02-17T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.422086 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:24Z\\\",\\\"message\\\":\\\" 6142 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 08:41:24.773631 6142 handler.go:208] Removed 
*v1.Pod event handler 3\\\\nI0217 08:41:24.773791 6142 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 08:41:24.773857 6142 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 08:41:24.773892 6142 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 08:41:24.773898 6142 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 08:41:24.773954 6142 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 08:41:24.773984 6142 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 08:41:24.773992 6142 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 08:41:24.774025 6142 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 08:41:24.774043 6142 factory.go:656] Stopping watch factory\\\\nI0217 08:41:24.774055 6142 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 08:41:24.774063 6142 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:24.774062 6142 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 08:41:24.774088 6142 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.443642 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.458250 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.500060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.500097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.500111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.500131 4813 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.500146 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:25Z","lastTransitionTime":"2026-02-17T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.499954 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029b
d38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022
a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":
\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.517970 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.540555 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.552916 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.576569 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.588472 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.602070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.602105 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.602116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.602132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.602143 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:25Z","lastTransitionTime":"2026-02-17T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.622835 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.636680 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.654543 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.670499 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.686757 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.696590 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:25Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.704830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.704878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.704895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:25 crc 
kubenswrapper[4813]: I0217 08:41:25.704916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.704930 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:25Z","lastTransitionTime":"2026-02-17T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.807227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.807254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.807263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.807278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.807287 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:25Z","lastTransitionTime":"2026-02-17T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.919584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.919615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.919623 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.919666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:25 crc kubenswrapper[4813]: I0217 08:41:25.919677 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:25Z","lastTransitionTime":"2026-02-17T08:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.022933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.022982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.022999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.023025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.023043 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:26Z","lastTransitionTime":"2026-02-17T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.053436 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:13:19.929693706 +0000 UTC
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.110849 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 08:41:26 crc kubenswrapper[4813]: E0217 08:41:26.111058 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.126078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.126142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.126165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.126194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.126217 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:26Z","lastTransitionTime":"2026-02-17T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.229921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.229987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.230008 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.230034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.230053 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:26Z","lastTransitionTime":"2026-02-17T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.332961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.333391 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.333612 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.333774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.333913 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:26Z","lastTransitionTime":"2026-02-17T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.385752 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/0.log" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.389776 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6"} Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.390557 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.413860 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.437067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.437172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.437188 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.437229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.437242 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:26Z","lastTransitionTime":"2026-02-17T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.443244 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.467990 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.487970 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.507080 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.533503 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.540117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.540178 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.540196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:26 crc 
kubenswrapper[4813]: I0217 08:41:26.540223 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.540241 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:26Z","lastTransitionTime":"2026-02-17T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.571224 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:24Z\\\",\\\"message\\\":\\\" 6142 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 08:41:24.773631 6142 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 08:41:24.773791 6142 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0217 08:41:24.773857 6142 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 08:41:24.773892 6142 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 08:41:24.773898 6142 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 08:41:24.773954 6142 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 08:41:24.773984 6142 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 08:41:24.773992 6142 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 08:41:24.774025 6142 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 08:41:24.774043 6142 factory.go:656] Stopping watch factory\\\\nI0217 08:41:24.774055 6142 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 08:41:24.774063 6142 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:24.774062 6142 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 08:41:24.774088 6142 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.605056 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.619070 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.633592 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e23
4105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.642426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.642472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.642483 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.642497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.642508 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:26Z","lastTransitionTime":"2026-02-17T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.646742 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.659725 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.670381 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.687067 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.697118 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.745346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.745381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.745389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.745403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.745412 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:26Z","lastTransitionTime":"2026-02-17T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.847990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.848221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.848303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.848513 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.848621 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:26Z","lastTransitionTime":"2026-02-17T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.950819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.951085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.951142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.951211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:26 crc kubenswrapper[4813]: I0217 08:41:26.951271 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:26Z","lastTransitionTime":"2026-02-17T08:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.040454 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc"] Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.041442 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.044739 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.045178 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.054571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.054862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.054538 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:04:21.591943455 +0000 UTC Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.055019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.055096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.055119 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:27Z","lastTransitionTime":"2026-02-17T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.065887 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.087052 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.104066 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.110812 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.110842 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:27 crc kubenswrapper[4813]: E0217 08:41:27.111021 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:27 crc kubenswrapper[4813]: E0217 08:41:27.111147 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.131617 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zr2l\" (UniqueName: \"kubernetes.io/projected/78e46e46-69da-4a35-ab3c-241f09064fe6-kube-api-access-9zr2l\") pod \"ovnkube-control-plane-749d76644c-lvvlc\" (UID: \"78e46e46-69da-4a35-ab3c-241f09064fe6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.131736 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/78e46e46-69da-4a35-ab3c-241f09064fe6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lvvlc\" (UID: \"78e46e46-69da-4a35-ab3c-241f09064fe6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: 
I0217 08:41:27.131859 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/78e46e46-69da-4a35-ab3c-241f09064fe6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lvvlc\" (UID: \"78e46e46-69da-4a35-ab3c-241f09064fe6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.131917 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/78e46e46-69da-4a35-ab3c-241f09064fe6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lvvlc\" (UID: \"78e46e46-69da-4a35-ab3c-241f09064fe6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.132054 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e23
4105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.158296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.158431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.158456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.158484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.158505 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:27Z","lastTransitionTime":"2026-02-17T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.163831 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:24Z\\\",\\\"message\\\":\\\" 6142 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 08:41:24.773631 6142 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 08:41:24.773791 6142 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0217 08:41:24.773857 6142 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 08:41:24.773892 6142 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 08:41:24.773898 6142 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 08:41:24.773954 6142 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 08:41:24.773984 6142 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 08:41:24.773992 6142 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 08:41:24.774025 6142 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 08:41:24.774043 6142 factory.go:656] Stopping watch factory\\\\nI0217 08:41:24.774055 6142 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 08:41:24.774063 6142 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:24.774062 6142 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 08:41:24.774088 6142 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.183887 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.197164 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.226985 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.233112 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/78e46e46-69da-4a35-ab3c-241f09064fe6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lvvlc\" (UID: \"78e46e46-69da-4a35-ab3c-241f09064fe6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.233183 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zr2l\" (UniqueName: 
\"kubernetes.io/projected/78e46e46-69da-4a35-ab3c-241f09064fe6-kube-api-access-9zr2l\") pod \"ovnkube-control-plane-749d76644c-lvvlc\" (UID: \"78e46e46-69da-4a35-ab3c-241f09064fe6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.233236 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/78e46e46-69da-4a35-ab3c-241f09064fe6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lvvlc\" (UID: \"78e46e46-69da-4a35-ab3c-241f09064fe6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.233363 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/78e46e46-69da-4a35-ab3c-241f09064fe6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lvvlc\" (UID: \"78e46e46-69da-4a35-ab3c-241f09064fe6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.234323 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/78e46e46-69da-4a35-ab3c-241f09064fe6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lvvlc\" (UID: \"78e46e46-69da-4a35-ab3c-241f09064fe6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.234439 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/78e46e46-69da-4a35-ab3c-241f09064fe6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lvvlc\" (UID: \"78e46e46-69da-4a35-ab3c-241f09064fe6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc 
kubenswrapper[4813]: I0217 08:41:27.242481 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/78e46e46-69da-4a35-ab3c-241f09064fe6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lvvlc\" (UID: \"78e46e46-69da-4a35-ab3c-241f09064fe6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.248294 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-al
erter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.260911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.260979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.261003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.261034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.261057 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:27Z","lastTransitionTime":"2026-02-17T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.261862 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zr2l\" (UniqueName: \"kubernetes.io/projected/78e46e46-69da-4a35-ab3c-241f09064fe6-kube-api-access-9zr2l\") pod \"ovnkube-control-plane-749d76644c-lvvlc\" (UID: \"78e46e46-69da-4a35-ab3c-241f09064fe6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.265005 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\
\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.287861 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.307737 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.328125 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.346098 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.364497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.364562 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.364583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.364607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.364625 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:27Z","lastTransitionTime":"2026-02-17T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.365151 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.366425 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.388480 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.397269 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/1.log" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.398287 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/0.log" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.405815 4813 
generic.go:334] "Generic (PLEG): container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6" exitCode=1 Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.405895 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6"} Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.405975 4813 scope.go:117] "RemoveContainer" containerID="1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.411072 4813 scope.go:117] "RemoveContainer" containerID="011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6" Feb 17 08:41:27 crc kubenswrapper[4813]: E0217 08:41:27.411425 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.429669 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.453069 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b010250992
0d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.468691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.468754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.468772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.468801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.468822 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:27Z","lastTransitionTime":"2026-02-17T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.476416 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:24Z\\\",\\\"message\\\":\\\" 6142 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 08:41:24.773631 6142 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 08:41:24.773791 6142 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0217 08:41:24.773857 6142 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 08:41:24.773892 6142 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 08:41:24.773898 6142 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 08:41:24.773954 6142 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 08:41:24.773984 6142 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 08:41:24.773992 6142 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 08:41:24.774025 6142 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 08:41:24.774043 6142 factory.go:656] Stopping watch factory\\\\nI0217 08:41:24.774055 6142 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 08:41:24.774063 6142 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:24.774062 6142 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 08:41:24.774088 6142 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:26Z\\\",\\\"message\\\":\\\"eck-target-xd92c\\\\nI0217 08:41:26.774278 6283 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 08:41:26.774278 6283 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qdj4m\\\\nF0217 08:41:26.774277 6283 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer 
during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z]\\\\nI0217 08:41:26.774285 6283 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0217 08:41:26.774293 6283 obj_retry.go:303] Retry object setup: *v1.Pod openshift-net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"}
,{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.524275 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.545762 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.571777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.571826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.571839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.571860 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.571873 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:27Z","lastTransitionTime":"2026-02-17T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.582595 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.602010 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.620758 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c
72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.640851 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.656704 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.675584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.675643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.675661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.675688 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.675709 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:27Z","lastTransitionTime":"2026-02-17T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.677447 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.696356 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.717072 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.736146 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.752910 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.771603 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:27Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.782294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.782405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.782423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.782449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.782466 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:27Z","lastTransitionTime":"2026-02-17T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.886410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.886442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.886453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.886469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.886482 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:27Z","lastTransitionTime":"2026-02-17T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.989707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.989756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.989767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.989786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:27 crc kubenswrapper[4813]: I0217 08:41:27.989798 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:27Z","lastTransitionTime":"2026-02-17T08:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.055752 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:05:33.67727203 +0000 UTC Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.092171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.092230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.092247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.092272 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.092293 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:28Z","lastTransitionTime":"2026-02-17T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.110480 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.110639 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.189754 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-srrq7"] Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.190252 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.190338 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.195662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.195709 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.195721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.195739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.195751 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:28Z","lastTransitionTime":"2026-02-17T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.208479 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.219794 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.239173 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 
08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.252062 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.274426 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b010250992
0d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.298368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.298419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.298436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.298458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.298472 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:28Z","lastTransitionTime":"2026-02-17T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.299911 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f0c2e9d8f6cd64979305df59205fb8aaa1be72d95890c1ec34303b7aa7481b3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:24Z\\\",\\\"message\\\":\\\" 6142 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 08:41:24.773631 6142 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 08:41:24.773791 6142 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0217 08:41:24.773857 6142 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 08:41:24.773892 6142 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 08:41:24.773898 6142 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 08:41:24.773954 6142 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 08:41:24.773984 6142 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 08:41:24.773992 6142 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 08:41:24.774025 6142 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 08:41:24.774043 6142 factory.go:656] Stopping watch factory\\\\nI0217 08:41:24.774055 6142 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 08:41:24.774063 6142 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:24.774062 6142 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 08:41:24.774088 6142 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:26Z\\\",\\\"message\\\":\\\"eck-target-xd92c\\\\nI0217 08:41:26.774278 6283 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 08:41:26.774278 6283 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qdj4m\\\\nF0217 08:41:26.774277 6283 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer 
during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z]\\\\nI0217 08:41:26.774285 6283 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0217 08:41:26.774293 6283 obj_retry.go:303] Retry object setup: *v1.Pod openshift-net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"}
,{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.316799 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.339651 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5e
d7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.352784 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwdld\" (UniqueName: \"kubernetes.io/projected/b42b143b-e85b-44cc-a427-ba1ebd82c55b-kube-api-access-mwdld\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.352978 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.356116 4813 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.377088 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.397238 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.401889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:28 crc 
kubenswrapper[4813]: I0217 08:41:28.401945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.401968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.401997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.402014 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:28Z","lastTransitionTime":"2026-02-17T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.411629 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" event={"ID":"78e46e46-69da-4a35-ab3c-241f09064fe6","Type":"ContainerStarted","Data":"e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb27372e278d62fa4413afd470e121"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.411764 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" event={"ID":"78e46e46-69da-4a35-ab3c-241f09064fe6","Type":"ContainerStarted","Data":"3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.411786 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" 
event={"ID":"78e46e46-69da-4a35-ab3c-241f09064fe6","Type":"ContainerStarted","Data":"9472e2691d954ecf5978ccb9d541085d8b825da4cf51d3e9ca766332e51dc10a"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.413679 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"moun
tPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.416123 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/1.log" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.421282 4813 scope.go:117] "RemoveContainer" containerID="011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6" Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.421477 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.432231 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc 
kubenswrapper[4813]: I0217 08:41:28.450720 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.453731 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwdld\" (UniqueName: \"kubernetes.io/projected/b42b143b-e85b-44cc-a427-ba1ebd82c55b-kube-api-access-mwdld\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.453767 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.453910 4813 secret.go:188] 
Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.453967 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs podName:b42b143b-e85b-44cc-a427-ba1ebd82c55b nodeName:}" failed. No retries permitted until 2026-02-17 08:41:28.95395185 +0000 UTC m=+36.614713083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs") pod "network-metrics-daemon-srrq7" (UID: "b42b143b-e85b-44cc-a427-ba1ebd82c55b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.470470 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.485661 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwdld\" (UniqueName: \"kubernetes.io/projected/b42b143b-e85b-44cc-a427-ba1ebd82c55b-kube-api-access-mwdld\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.487625 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.505437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.505504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.505526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.505558 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.505585 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:28Z","lastTransitionTime":"2026-02-17T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.509271 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.525352 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c6
5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb27372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.542579 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.557474 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.581479 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e23
4105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.603796 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:26Z\\\",\\\"message\\\":\\\"eck-target-xd92c\\\\nI0217 08:41:26.774278 6283 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 08:41:26.774278 6283 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qdj4m\\\\nF0217 08:41:26.774277 6283 ovnkube.go:137] failed to 
run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z]\\\\nI0217 08:41:26.774285 6283 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0217 08:41:26.774293 6283 obj_retry.go:303] Retry object setup: *v1.Pod openshift-net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.608753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.608812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.608831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.608856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.608877 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:28Z","lastTransitionTime":"2026-02-17T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.626416 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:
41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.640886 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.681004 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc 
kubenswrapper[4813]: I0217 08:41:28.712231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.712299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.712359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.712391 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.712413 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:28Z","lastTransitionTime":"2026-02-17T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.714743 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.734037 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.754021 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c
72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.774769 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.781690 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.781894 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:41:44.781852861 +0000 UTC m=+52.442614124 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.781987 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.782137 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.782164 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.782244 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:44.78221908 +0000 UTC m=+52.442980333 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.782368 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.782485 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:44.782456376 +0000 UTC m=+52.443217689 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.795963 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3
ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.814415 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.815484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.815537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.815554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.815579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.815596 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:28Z","lastTransitionTime":"2026-02-17T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.832821 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.855098 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.873445 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.882975 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.883091 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.883212 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.883245 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.883265 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.883275 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.883336 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.883358 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.883377 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:44.883349077 +0000 UTC m=+52.544110330 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.883466 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 08:41:44.88344399 +0000 UTC m=+52.544205253 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.918245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.918362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.918386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.918410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.918427 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:28Z","lastTransitionTime":"2026-02-17T08:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.918626 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.942159 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.961197 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.981148 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.984119 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.984338 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 08:41:28 crc kubenswrapper[4813]: E0217 08:41:28.984434 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs podName:b42b143b-e85b-44cc-a427-ba1ebd82c55b nodeName:}" failed. No retries permitted until 2026-02-17 08:41:29.984410483 +0000 UTC m=+37.645171746 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs") pod "network-metrics-daemon-srrq7" (UID: "b42b143b-e85b-44cc-a427-ba1ebd82c55b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 08:41:28 crc kubenswrapper[4813]: I0217 08:41:28.999235 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:28Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.015846 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.021046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.021132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.021161 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc 
kubenswrapper[4813]: I0217 08:41:29.021197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.021222 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.036996 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.056796 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:18:38.193818845 +0000 UTC Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.059863 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.081970 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b010250992
0d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.111030 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.111109 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.111047 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:26Z\\\",\\\"message\\\":\\\"eck-target-xd92c\\\\nI0217 08:41:26.774278 6283 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 08:41:26.774278 6283 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qdj4m\\\\nF0217 08:41:26.774277 6283 ovnkube.go:137] failed to 
run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z]\\\\nI0217 08:41:26.774285 6283 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0217 08:41:26.774293 6283 obj_retry.go:303] Retry object setup: *v1.Pod openshift-net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: E0217 08:41:29.111200 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:29 crc kubenswrapper[4813]: E0217 08:41:29.111297 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.124397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.124445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.124463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.124485 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.124502 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.130667 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb27372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.144825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc 
kubenswrapper[4813]: I0217 08:41:29.144877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.144900 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.144928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.144947 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.154824 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6ab
d7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: E0217 08:41:29.165435 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.170526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.170590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.170612 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.170642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.170665 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: E0217 08:41:29.191632 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.191985 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b2670
2f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045
cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.197185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.197240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.197260 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.197285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.197303 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.210353 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: E0217 08:41:29.220066 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.224542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.224598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.224617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.224642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.224659 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.233341 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: E0217 08:41:29.243925 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.250808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.251011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.251208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.251477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.251676 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.255529 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z 
is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: E0217 08:41:29.272011 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: E0217 08:41:29.272375 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.272451 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd1
5c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.274941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.275016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.275046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.275078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.275105 4813 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.294607 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:29Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:29 crc 
kubenswrapper[4813]: I0217 08:41:29.378358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.378446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.378472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.378504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.378526 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.481743 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.481807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.481826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.481850 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.481868 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.584483 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.584540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.584553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.584569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.584578 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.687947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.688019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.688042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.688076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.688100 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.791423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.791795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.791807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.791827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.791839 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.894463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.894518 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.894535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.894559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.894576 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:29 crc kubenswrapper[4813]: E0217 08:41:29.996504 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 08:41:29 crc kubenswrapper[4813]: E0217 08:41:29.996650 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs podName:b42b143b-e85b-44cc-a427-ba1ebd82c55b nodeName:}" failed. No retries permitted until 2026-02-17 08:41:31.996611858 +0000 UTC m=+39.657373121 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs") pod "network-metrics-daemon-srrq7" (UID: "b42b143b-e85b-44cc-a427-ba1ebd82c55b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.996299 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.998170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.998231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.998249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.998273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:29 crc kubenswrapper[4813]: I0217 08:41:29.998332 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:29Z","lastTransitionTime":"2026-02-17T08:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.057582 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:56:52.182606592 +0000 UTC Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.101042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.101091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.101108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.101132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.101150 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:30Z","lastTransitionTime":"2026-02-17T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.111030 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:30 crc kubenswrapper[4813]: E0217 08:41:30.111175 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.111516 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:30 crc kubenswrapper[4813]: E0217 08:41:30.111705 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.204039 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.204102 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.204122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.204154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.204176 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:30Z","lastTransitionTime":"2026-02-17T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.307424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.307489 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.307508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.307538 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.307558 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:30Z","lastTransitionTime":"2026-02-17T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.410492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.410572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.410595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.410625 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.410653 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:30Z","lastTransitionTime":"2026-02-17T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.513522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.513586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.513612 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.513659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.513680 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:30Z","lastTransitionTime":"2026-02-17T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.621229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.621298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.621358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.621385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.621403 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:30Z","lastTransitionTime":"2026-02-17T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.724405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.724473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.724490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.724516 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.724536 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:30Z","lastTransitionTime":"2026-02-17T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.827560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.827628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.827646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.827669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.827689 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:30Z","lastTransitionTime":"2026-02-17T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.931051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.931109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.931148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.931169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:30 crc kubenswrapper[4813]: I0217 08:41:30.931185 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:30Z","lastTransitionTime":"2026-02-17T08:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.034352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.034427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.034444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.034468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.034486 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:31Z","lastTransitionTime":"2026-02-17T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.057820 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:55:22.536286677 +0000 UTC Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.110444 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:31 crc kubenswrapper[4813]: E0217 08:41:31.110609 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.110815 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:31 crc kubenswrapper[4813]: E0217 08:41:31.111035 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.136816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.136877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.136899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.136922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.136940 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:31Z","lastTransitionTime":"2026-02-17T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.239941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.240032 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.240052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.240077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.240094 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:31Z","lastTransitionTime":"2026-02-17T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.342642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.342738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.342756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.342786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.342806 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:31Z","lastTransitionTime":"2026-02-17T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.445492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.445570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.445601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.445629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.445649 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:31Z","lastTransitionTime":"2026-02-17T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.549001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.549051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.549067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.549090 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.549106 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:31Z","lastTransitionTime":"2026-02-17T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.651141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.651191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.651205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.651226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.651240 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:31Z","lastTransitionTime":"2026-02-17T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.754404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.754471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.754495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.754525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.754546 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:31Z","lastTransitionTime":"2026-02-17T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.857430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.857498 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.857520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.857550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.857569 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:31Z","lastTransitionTime":"2026-02-17T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.960418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.960502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.960521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.960548 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:31 crc kubenswrapper[4813]: I0217 08:41:31.960566 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:31Z","lastTransitionTime":"2026-02-17T08:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.021806 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7"
Feb 17 08:41:32 crc kubenswrapper[4813]: E0217 08:41:32.022029 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 08:41:32 crc kubenswrapper[4813]: E0217 08:41:32.022149 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs podName:b42b143b-e85b-44cc-a427-ba1ebd82c55b nodeName:}" failed. No retries permitted until 2026-02-17 08:41:36.022118458 +0000 UTC m=+43.682879711 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs") pod "network-metrics-daemon-srrq7" (UID: "b42b143b-e85b-44cc-a427-ba1ebd82c55b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.058016 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:17:10.098158081 +0000 UTC
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.064089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.064142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.064153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.064173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.064186 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:32Z","lastTransitionTime":"2026-02-17T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.110642 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.110642 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:32 crc kubenswrapper[4813]: E0217 08:41:32.110864 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:32 crc kubenswrapper[4813]: E0217 08:41:32.110990 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.167781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.168377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.168406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.168440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.168464 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:32Z","lastTransitionTime":"2026-02-17T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.271153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.271227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.271251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.271284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.271337 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:32Z","lastTransitionTime":"2026-02-17T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.375161 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.375229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.375253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.375283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.375334 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:32Z","lastTransitionTime":"2026-02-17T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.478929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.478988 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.479002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.479023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.479038 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:32Z","lastTransitionTime":"2026-02-17T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.582082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.582168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.582189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.582621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.582851 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:32Z","lastTransitionTime":"2026-02-17T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.686377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.686427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.686445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.686471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.686492 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:32Z","lastTransitionTime":"2026-02-17T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.789895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.789985 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.790007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.790038 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.790061 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:32Z","lastTransitionTime":"2026-02-17T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.893812 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.893882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.893902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.893930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.893950 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:32Z","lastTransitionTime":"2026-02-17T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.997591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.997646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.997664 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.997688 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:32 crc kubenswrapper[4813]: I0217 08:41:32.997705 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:32Z","lastTransitionTime":"2026-02-17T08:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.058232 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:31:17.395657194 +0000 UTC
Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.100847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.100920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.100940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.100967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.100987 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:33Z","lastTransitionTime":"2026-02-17T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.110538 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 08:41:33 crc kubenswrapper[4813]: E0217 08:41:33.110700 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.110780 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:33 crc kubenswrapper[4813]: E0217 08:41:33.110974 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.132573 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.154429 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.177782 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.200055 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.205456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.205531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.205562 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.205599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.205631 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:33Z","lastTransitionTime":"2026-02-17T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.224217 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.245333 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.268915 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6ab
d7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.285999 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.304100 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b010250992
0d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.310298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.310423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.310447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.310475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.310504 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:33Z","lastTransitionTime":"2026-02-17T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.339763 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:26Z\\\",\\\"message\\\":\\\"eck-target-xd92c\\\\nI0217 08:41:26.774278 6283 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 08:41:26.774278 6283 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qdj4m\\\\nF0217 08:41:26.774277 6283 ovnkube.go:137] failed to 
run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z]\\\\nI0217 08:41:26.774285 6283 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0217 08:41:26.774293 6283 obj_retry.go:303] Retry object setup: *v1.Pod openshift-net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.361364 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb2
7372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.393517 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.407136 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.414162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.414203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.414215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.414234 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.414248 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:33Z","lastTransitionTime":"2026-02-17T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.428801 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.445792 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.462338 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.474252 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:33Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:33 crc 
kubenswrapper[4813]: I0217 08:41:33.517644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.517766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.517783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.517810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.517827 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:33Z","lastTransitionTime":"2026-02-17T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.620750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.620820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.620840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.620865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.620882 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:33Z","lastTransitionTime":"2026-02-17T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.724022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.724100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.724119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.724146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.724164 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:33Z","lastTransitionTime":"2026-02-17T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.827566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.827620 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.827638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.827662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.827680 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:33Z","lastTransitionTime":"2026-02-17T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.931171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.931245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.931270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.931350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:33 crc kubenswrapper[4813]: I0217 08:41:33.931390 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:33Z","lastTransitionTime":"2026-02-17T08:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.034713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.034810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.034832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.034865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.034889 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:34Z","lastTransitionTime":"2026-02-17T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.059340 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:29:56.197245285 +0000 UTC Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.111267 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:34 crc kubenswrapper[4813]: E0217 08:41:34.111511 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.111588 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:34 crc kubenswrapper[4813]: E0217 08:41:34.111776 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.137922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.137989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.138012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.138043 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.138065 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:34Z","lastTransitionTime":"2026-02-17T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.241271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.241382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.241409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.241435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.241453 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:34Z","lastTransitionTime":"2026-02-17T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.343824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.343877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.343899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.343925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.343949 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:34Z","lastTransitionTime":"2026-02-17T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.446478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.446551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.446576 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.446607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.446631 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:34Z","lastTransitionTime":"2026-02-17T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.549916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.549974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.549992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.550019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.550040 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:34Z","lastTransitionTime":"2026-02-17T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.652975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.653035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.653053 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.653077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.653095 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:34Z","lastTransitionTime":"2026-02-17T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.756388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.756491 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.756515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.756544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.756564 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:34Z","lastTransitionTime":"2026-02-17T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.859216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.859273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.859292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.859343 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.859360 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:34Z","lastTransitionTime":"2026-02-17T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.962341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.962410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.962429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.962452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:34 crc kubenswrapper[4813]: I0217 08:41:34.962469 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:34Z","lastTransitionTime":"2026-02-17T08:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.059837 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:54:28.482218324 +0000 UTC
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.065216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.065265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.065283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.065341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.065358 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:35Z","lastTransitionTime":"2026-02-17T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.110553 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.110558 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 08:41:35 crc kubenswrapper[4813]: E0217 08:41:35.110776 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 08:41:35 crc kubenswrapper[4813]: E0217 08:41:35.110952 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.168740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.168829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.168851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.168881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.168902 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:35Z","lastTransitionTime":"2026-02-17T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.272267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.272375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.272401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.272429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.272448 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:35Z","lastTransitionTime":"2026-02-17T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.376026 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.376082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.376099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.376125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.376146 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:35Z","lastTransitionTime":"2026-02-17T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.479090 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.479157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.479185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.479214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.479238 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:35Z","lastTransitionTime":"2026-02-17T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.583700 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.583763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.583781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.583810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.583835 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:35Z","lastTransitionTime":"2026-02-17T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.687352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.687971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.688134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.688370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.688520 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:35Z","lastTransitionTime":"2026-02-17T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.791106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.791617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.791848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.792055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.792204 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:35Z","lastTransitionTime":"2026-02-17T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.896088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.896151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.896170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.896197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.896216 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:35Z","lastTransitionTime":"2026-02-17T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.999439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.999535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.999557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.999902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:35 crc kubenswrapper[4813]: I0217 08:41:35.999927 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:35Z","lastTransitionTime":"2026-02-17T08:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.060207 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:02:53.422959482 +0000 UTC
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.068044 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7"
Feb 17 08:41:36 crc kubenswrapper[4813]: E0217 08:41:36.068263 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 08:41:36 crc kubenswrapper[4813]: E0217 08:41:36.068429 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs podName:b42b143b-e85b-44cc-a427-ba1ebd82c55b nodeName:}" failed. No retries permitted until 2026-02-17 08:41:44.068395 +0000 UTC m=+51.729156273 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs") pod "network-metrics-daemon-srrq7" (UID: "b42b143b-e85b-44cc-a427-ba1ebd82c55b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.102512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.102560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.102582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.102611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.102635 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:36Z","lastTransitionTime":"2026-02-17T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.110943 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.110958 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7"
Feb 17 08:41:36 crc kubenswrapper[4813]: E0217 08:41:36.111132 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 08:41:36 crc kubenswrapper[4813]: E0217 08:41:36.111223 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.206027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.206091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.206110 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.206136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.206155 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:36Z","lastTransitionTime":"2026-02-17T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.309865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.309932 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.309949 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.309975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.309993 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:36Z","lastTransitionTime":"2026-02-17T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.413267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.413367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.413385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.413419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.413437 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:36Z","lastTransitionTime":"2026-02-17T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.516118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.516201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.516220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.516246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.516263 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:36Z","lastTransitionTime":"2026-02-17T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.619822 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.619884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.619904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.619935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.619953 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:36Z","lastTransitionTime":"2026-02-17T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.722711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.722761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.722778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.722800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.722817 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:36Z","lastTransitionTime":"2026-02-17T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.825972 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.826044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.826067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.826096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.826117 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:36Z","lastTransitionTime":"2026-02-17T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.929908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.929968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.929985 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.930009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:36 crc kubenswrapper[4813]: I0217 08:41:36.930030 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:36Z","lastTransitionTime":"2026-02-17T08:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.033563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.033655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.033679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.033713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.033736 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:37Z","lastTransitionTime":"2026-02-17T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.060548 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:11:09.409504035 +0000 UTC
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.110772 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.110847 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 08:41:37 crc kubenswrapper[4813]: E0217 08:41:37.111007 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 08:41:37 crc kubenswrapper[4813]: E0217 08:41:37.111159 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.136481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.136807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.137039 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.137269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.137480 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:37Z","lastTransitionTime":"2026-02-17T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.241040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.241104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.241122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.241149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.241167 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:37Z","lastTransitionTime":"2026-02-17T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.344207 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.344262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.344278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.344334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.344354 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:37Z","lastTransitionTime":"2026-02-17T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.447699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.447768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.447789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.447813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.447831 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:37Z","lastTransitionTime":"2026-02-17T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.550268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.550348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.550366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.550392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.550409 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:37Z","lastTransitionTime":"2026-02-17T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.652815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.652888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.652910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.652942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.652969 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:37Z","lastTransitionTime":"2026-02-17T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.756616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.756836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.756866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.756961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.757028 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:37Z","lastTransitionTime":"2026-02-17T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.860470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.860546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.860570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.860603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.860626 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:37Z","lastTransitionTime":"2026-02-17T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.964184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.964247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.964271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.964302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:37 crc kubenswrapper[4813]: I0217 08:41:37.964360 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:37Z","lastTransitionTime":"2026-02-17T08:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.061600 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:15:50.622696659 +0000 UTC Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.067167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.067225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.067249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.067279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.067300 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:38Z","lastTransitionTime":"2026-02-17T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.110447 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.110447 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:38 crc kubenswrapper[4813]: E0217 08:41:38.110646 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:38 crc kubenswrapper[4813]: E0217 08:41:38.110742 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.171736 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.171835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.171859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.171891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.171911 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:38Z","lastTransitionTime":"2026-02-17T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.274707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.275030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.275215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.275418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.275583 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:38Z","lastTransitionTime":"2026-02-17T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.379285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.379381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.379399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.379425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.379441 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:38Z","lastTransitionTime":"2026-02-17T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.482547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.482592 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.482604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.482622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.482635 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:38Z","lastTransitionTime":"2026-02-17T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.585514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.585593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.585619 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.585650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.585675 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:38Z","lastTransitionTime":"2026-02-17T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.688218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.688273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.688290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.688354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.688379 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:38Z","lastTransitionTime":"2026-02-17T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.791658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.791731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.791753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.791786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.791813 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:38Z","lastTransitionTime":"2026-02-17T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.895834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.895991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.896016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.896044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.896066 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:38Z","lastTransitionTime":"2026-02-17T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.999506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.999606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.999670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.999735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:38 crc kubenswrapper[4813]: I0217 08:41:38.999754 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:38Z","lastTransitionTime":"2026-02-17T08:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.062718 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:55:28.438790676 +0000 UTC Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.102154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.102214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.102238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.102468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.102493 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.111064 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.111128 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:39 crc kubenswrapper[4813]: E0217 08:41:39.111214 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:39 crc kubenswrapper[4813]: E0217 08:41:39.112139 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.113193 4813 scope.go:117] "RemoveContainer" containerID="011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.207010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.207085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.207111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.207145 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.207170 4813 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.309429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.309450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.309458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.309471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.309480 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.405475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.405509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.405517 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.405530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.405541 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: E0217 08:41:39.423238 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.428441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.428470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.428481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.428494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.428502 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: E0217 08:41:39.447326 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.451964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.451999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.452011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.452031 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.452044 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.465518 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/1.log" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.469230 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f"} Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.471056 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:41:39 crc kubenswrapper[4813]: E0217 08:41:39.475425 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.489240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.489381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.489400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.489426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.489443 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.491376 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: E0217 08:41:39.507945 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.513698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.513714 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdca
bb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\
\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.513767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.513940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.513999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.514046 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.530592 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: E0217 08:41:39.534003 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: E0217 08:41:39.534238 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.536647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.536702 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.536723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.536747 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.536765 4813 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.551569 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc 
kubenswrapper[4813]: I0217 08:41:39.580887 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.595778 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35
512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.623418 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.639361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 
08:41:39.639415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.639428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.639452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.639467 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.645151 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.665696 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.684053 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.698023 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.711255 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.731056 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:26Z\\\",\\\"message\\\":\\\"eck-target-xd92c\\\\nI0217 08:41:26.774278 6283 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 08:41:26.774278 6283 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qdj4m\\\\nF0217 08:41:26.774277 6283 ovnkube.go:137] failed to 
run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z]\\\\nI0217 08:41:26.774285 6283 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0217 08:41:26.774293 6283 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",
\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.741720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.741766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.741781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.741800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.741813 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.743839 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb27372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.763409 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582577
1aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.776387 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.789718 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b010250992
0d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:39Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.844318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.844360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.844369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.844385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.844394 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.947911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.947986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.948027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.948056 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:39 crc kubenswrapper[4813]: I0217 08:41:39.948075 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:39Z","lastTransitionTime":"2026-02-17T08:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.051413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.051504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.051530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.051564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.051586 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:40Z","lastTransitionTime":"2026-02-17T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.063868 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:44:49.079141276 +0000 UTC Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.110994 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.111080 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:40 crc kubenswrapper[4813]: E0217 08:41:40.111218 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:40 crc kubenswrapper[4813]: E0217 08:41:40.111395 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.155185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.155246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.155264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.155291 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.155386 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:40Z","lastTransitionTime":"2026-02-17T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.258827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.258926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.258944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.258973 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.258993 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:40Z","lastTransitionTime":"2026-02-17T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.362377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.362445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.362468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.362500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.362521 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:40Z","lastTransitionTime":"2026-02-17T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.466116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.466201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.466227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.466261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.466285 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:40Z","lastTransitionTime":"2026-02-17T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.475535 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/2.log" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.476735 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/1.log" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.482374 4813 generic.go:334] "Generic (PLEG): container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f" exitCode=1 Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.482457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f"} Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.482535 4813 scope.go:117] "RemoveContainer" containerID="011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.483559 4813 scope.go:117] "RemoveContainer" containerID="d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f" Feb 17 08:41:40 crc kubenswrapper[4813]: E0217 08:41:40.483848 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.505255 4813 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.524213 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.542944 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb2
7372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.565806 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6ab
d7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.569791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.569846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.569863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.569887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.569904 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:40Z","lastTransitionTime":"2026-02-17T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.583208 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.605701 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.636735 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://011cf35bf3ac90ceacec5307709e328e6a05f5569ee9e66d16b289897fd2eee6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:26Z\\\",\\\"message\\\":\\\"eck-target-xd92c\\\\nI0217 08:41:26.774278 6283 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 08:41:26.774278 6283 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-qdj4m\\\\nF0217 08:41:26.774277 6283 ovnkube.go:137] failed to 
run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:26Z is after 2025-08-24T17:21:41Z]\\\\nI0217 08:41:26.774285 6283 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0217 08:41:26.774293 6283 obj_retry.go:303] Retry object setup: *v1.Pod openshift-net\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:40Z\\\",\\\"message\\\":\\\"aiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 08:41:40.115817 6477 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-srrq7 before timer (time: 2026-02-17 08:41:41.255882976 +0000 UTC m=+1.828758339): skip\\\\nI0217 08:41:40.115924 6477 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 149.213µs)\\\\nI0217 08:41:40.115784 6477 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 08:41:40.115816 6477 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:40.116076 6477 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 08:41:40.116179 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\"
:\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 
08:41:40.657443 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 
08:41:40.673659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.673723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.673745 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.673787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.673811 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:40Z","lastTransitionTime":"2026-02-17T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.675528 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.693494 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.725771 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b
3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.745705 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.767150 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c
72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.777241 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.777352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.777372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.777399 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.777416 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:40Z","lastTransitionTime":"2026-02-17T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.789427 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.811243 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.831958 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.852193 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:40Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.880551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.880607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.880624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.880647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.881463 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:40Z","lastTransitionTime":"2026-02-17T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.984695 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.984769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.984790 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.984819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:40 crc kubenswrapper[4813]: I0217 08:41:40.984838 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:40Z","lastTransitionTime":"2026-02-17T08:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.064944 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:31:52.273732781 +0000 UTC Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.087140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.087191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.087209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.087233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.087250 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:41Z","lastTransitionTime":"2026-02-17T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.110782 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:41 crc kubenswrapper[4813]: E0217 08:41:41.110984 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.111116 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:41 crc kubenswrapper[4813]: E0217 08:41:41.111279 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.190627 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.190689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.190706 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.190731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.190749 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:41Z","lastTransitionTime":"2026-02-17T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.293691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.293757 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.293774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.293799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.293816 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:41Z","lastTransitionTime":"2026-02-17T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.397355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.397426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.397450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.397482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.397506 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:41Z","lastTransitionTime":"2026-02-17T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.492944 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/2.log" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.498187 4813 scope.go:117] "RemoveContainer" containerID="d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f" Feb 17 08:41:41 crc kubenswrapper[4813]: E0217 08:41:41.498531 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.499165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.499439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.499591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.499770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.499941 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:41Z","lastTransitionTime":"2026-02-17T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.519868 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.537965 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.559690 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6ab
d7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.578666 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.604203 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b010250992
0d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.604768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.604826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.604849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.604878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.604899 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:41Z","lastTransitionTime":"2026-02-17T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.633266 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:40Z\\\",\\\"message\\\":\\\"aiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 08:41:40.115817 6477 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-srrq7 before timer (time: 2026-02-17 08:41:41.255882976 +0000 UTC m=+1.828758339): skip\\\\nI0217 08:41:40.115924 6477 obj_retry.go:420] Function 
iterateRetryResources for *v1.Pod ended (in 149.213µs)\\\\nI0217 08:41:40.115784 6477 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 08:41:40.115816 6477 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:40.116076 6477 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 08:41:40.116179 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.651383 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb2
7372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.668518 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc 
kubenswrapper[4813]: I0217 08:41:41.706678 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.708161 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.708225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.708246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.708277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.708297 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:41Z","lastTransitionTime":"2026-02-17T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.726425 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.746737 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.768413 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.787529 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.808959 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.811663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.811724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.811741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.811772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.811792 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:41Z","lastTransitionTime":"2026-02-17T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.829209 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.849031 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.871839 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:41Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.915535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.915613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.915634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.915663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:41 crc kubenswrapper[4813]: I0217 08:41:41.915682 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:41Z","lastTransitionTime":"2026-02-17T08:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.018925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.019012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.019035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.019065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.019087 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:42Z","lastTransitionTime":"2026-02-17T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.065620 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:19:25.229350303 +0000 UTC Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.110215 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.110264 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:42 crc kubenswrapper[4813]: E0217 08:41:42.110428 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:42 crc kubenswrapper[4813]: E0217 08:41:42.110585 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.123783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.123857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.123876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.123901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.123920 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:42Z","lastTransitionTime":"2026-02-17T08:41:42Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.227358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.227422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.227439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.227464 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.227480 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:42Z","lastTransitionTime":"2026-02-17T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.330923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.330990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.331006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.331031 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.331049 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:42Z","lastTransitionTime":"2026-02-17T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.434014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.434084 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.434100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.434133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.434151 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:42Z","lastTransitionTime":"2026-02-17T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.537166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.537245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.537264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.537288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.537342 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:42Z","lastTransitionTime":"2026-02-17T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.640565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.640636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.640658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.640688 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.640710 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:42Z","lastTransitionTime":"2026-02-17T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.743506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.743571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.743605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.743646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.743672 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:42Z","lastTransitionTime":"2026-02-17T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.851233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.851354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.851376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.851453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.851474 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:42Z","lastTransitionTime":"2026-02-17T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.954230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.954272 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.954284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.954300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:42 crc kubenswrapper[4813]: I0217 08:41:42.954332 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:42Z","lastTransitionTime":"2026-02-17T08:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.057890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.057965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.058000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.058034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.058057 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:43Z","lastTransitionTime":"2026-02-17T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.066300 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:45:51.8141648 +0000 UTC
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.110728 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.110867 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 08:41:43 crc kubenswrapper[4813]: E0217 08:41:43.110963 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 08:41:43 crc kubenswrapper[4813]: E0217 08:41:43.111103 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.127845 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.141602 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc 
kubenswrapper[4813]: I0217 08:41:43.161295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.161400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.161425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.161461 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.161486 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:43Z","lastTransitionTime":"2026-02-17T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.174986 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.192428 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.210498 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c
72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.241227 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.259695 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.263606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.263645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.263663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.263685 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.263700 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:43Z","lastTransitionTime":"2026-02-17T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.278476 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.295122 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.311414 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.329287 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.342968 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.362008 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6ab
d7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.365829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.365889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.365908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.365933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.365953 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:43Z","lastTransitionTime":"2026-02-17T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.375438 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.397719 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.422944 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:40Z\\\",\\\"message\\\":\\\"aiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 08:41:40.115817 6477 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-srrq7 before timer (time: 2026-02-17 08:41:41.255882976 +0000 UTC m=+1.828758339): skip\\\\nI0217 08:41:40.115924 6477 obj_retry.go:420] Function 
iterateRetryResources for *v1.Pod ended (in 149.213µs)\\\\nI0217 08:41:40.115784 6477 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 08:41:40.115816 6477 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:40.116076 6477 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 08:41:40.116179 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.437500 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb2
7372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:43Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.468661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.468713 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.468730 4813 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.468753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.468771 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:43Z","lastTransitionTime":"2026-02-17T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.571228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.571287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.571304 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.571383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.571405 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:43Z","lastTransitionTime":"2026-02-17T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.674611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.674683 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.674702 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.674728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.674748 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:43Z","lastTransitionTime":"2026-02-17T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.777363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.777414 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.777425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.777439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.777449 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:43Z","lastTransitionTime":"2026-02-17T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.879923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.879985 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.880003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.880029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.880046 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:43Z","lastTransitionTime":"2026-02-17T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.982697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.982760 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.982779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.982806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:43 crc kubenswrapper[4813]: I0217 08:41:43.982828 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:43Z","lastTransitionTime":"2026-02-17T08:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.067231 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:37:35.912023349 +0000 UTC Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.085990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.086049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.086065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.086093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.086119 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:44Z","lastTransitionTime":"2026-02-17T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.110431 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.110681 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.110818 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.110946 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.162099 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.162356 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.162450 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs podName:b42b143b-e85b-44cc-a427-ba1ebd82c55b nodeName:}" failed. 
No retries permitted until 2026-02-17 08:42:00.162425783 +0000 UTC m=+67.823187046 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs") pod "network-metrics-daemon-srrq7" (UID: "b42b143b-e85b-44cc-a427-ba1ebd82c55b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.189453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.189513 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.189532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.189557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.189577 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:44Z","lastTransitionTime":"2026-02-17T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.292933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.292990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.293006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.293029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.293047 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:44Z","lastTransitionTime":"2026-02-17T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.397363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.397427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.397444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.397469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.397486 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:44Z","lastTransitionTime":"2026-02-17T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.500703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.500769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.500787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.500813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.500832 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:44Z","lastTransitionTime":"2026-02-17T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.604530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.604585 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.604602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.604627 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.604644 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:44Z","lastTransitionTime":"2026-02-17T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.708599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.708662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.708676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.708699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.708711 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:44Z","lastTransitionTime":"2026-02-17T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.812378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.812456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.812480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.812517 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.812539 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:44Z","lastTransitionTime":"2026-02-17T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.869070 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.869340 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 08:42:16.869264749 +0000 UTC m=+84.530026002 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.869424 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.869598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.869717 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.869813 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.869895 4813 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:42:16.869828513 +0000 UTC m=+84.530589776 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.869943 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:42:16.869921495 +0000 UTC m=+84.530682828 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.916198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.916260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.916280 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.916334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.916351 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:44Z","lastTransitionTime":"2026-02-17T08:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.970282 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:44 crc kubenswrapper[4813]: I0217 08:41:44.970404 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.970582 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.970610 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.970629 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.970697 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 08:42:16.970675063 +0000 UTC m=+84.631436316 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.971172 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.971296 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.971373 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:44 crc kubenswrapper[4813]: E0217 08:41:44.971511 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 08:42:16.971474153 +0000 UTC m=+84.632235436 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.019389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.019456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.019474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.019502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.019521 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:45Z","lastTransitionTime":"2026-02-17T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.068235 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:40:54.785686777 +0000 UTC Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.075663 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.090658 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.099185 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819
eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.110743 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.110793 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:45 crc kubenswrapper[4813]: E0217 08:41:45.110913 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:45 crc kubenswrapper[4813]: E0217 08:41:45.111088 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.115103 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs
.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.122880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.122975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.123040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.123070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.123135 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:45Z","lastTransitionTime":"2026-02-17T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.132702 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc 
kubenswrapper[4813]: I0217 08:41:45.166538 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.189926 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35
512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.210839 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.227636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.227698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.227719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.227743 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.227764 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:45Z","lastTransitionTime":"2026-02-17T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.234275 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.259976 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f
9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.276766 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.293370 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.306831 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.320584 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.331469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.331537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.331560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:45 crc 
kubenswrapper[4813]: I0217 08:41:45.331594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.331617 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:45Z","lastTransitionTime":"2026-02-17T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.334512 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb27372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17
T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.356005 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\
\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.371230 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.392521 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b010250992
0d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.421995 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:40Z\\\",\\\"message\\\":\\\"aiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 
08:41:40.115817 6477 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-srrq7 before timer (time: 2026-02-17 08:41:41.255882976 +0000 UTC m=+1.828758339): skip\\\\nI0217 08:41:40.115924 6477 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 149.213µs)\\\\nI0217 08:41:40.115784 6477 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 08:41:40.115816 6477 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:40.116076 6477 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 08:41:40.116179 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:45Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.434006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.434101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.434119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.434146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.434163 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:45Z","lastTransitionTime":"2026-02-17T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.537189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.537251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.537268 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.537294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.537339 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:45Z","lastTransitionTime":"2026-02-17T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.640549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.640616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.640635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.640661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.640679 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:45Z","lastTransitionTime":"2026-02-17T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.743819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.743886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.743910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.743940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.743958 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:45Z","lastTransitionTime":"2026-02-17T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.847287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.847403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.847460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.847485 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.847536 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:45Z","lastTransitionTime":"2026-02-17T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.950422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.950477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.950495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.950521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:45 crc kubenswrapper[4813]: I0217 08:41:45.950540 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:45Z","lastTransitionTime":"2026-02-17T08:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.054040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.054122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.054142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.054167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.054183 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:46Z","lastTransitionTime":"2026-02-17T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.068388 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 14:38:14.601631278 +0000 UTC Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.110194 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.110213 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:46 crc kubenswrapper[4813]: E0217 08:41:46.110495 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:46 crc kubenswrapper[4813]: E0217 08:41:46.110689 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.157942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.158001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.158022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.158048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.158067 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:46Z","lastTransitionTime":"2026-02-17T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.261866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.261948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.261975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.262007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.262031 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:46Z","lastTransitionTime":"2026-02-17T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.365293 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.365388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.365411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.365446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.365468 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:46Z","lastTransitionTime":"2026-02-17T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.469127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.469192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.469256 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.469284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.469302 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:46Z","lastTransitionTime":"2026-02-17T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.571711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.571769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.571787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.571813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.571833 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:46Z","lastTransitionTime":"2026-02-17T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.674895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.674981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.675009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.675045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.675065 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:46Z","lastTransitionTime":"2026-02-17T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.778278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.778389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.778415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.778444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.778466 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:46Z","lastTransitionTime":"2026-02-17T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.881617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.881702 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.881726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.881757 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.881778 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:46Z","lastTransitionTime":"2026-02-17T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.984252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.984348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.984366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.984391 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:46 crc kubenswrapper[4813]: I0217 08:41:46.984406 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:46Z","lastTransitionTime":"2026-02-17T08:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.069242 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:27:09.26731432 +0000 UTC
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.087488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.087555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.087578 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.087609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.087630 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:47Z","lastTransitionTime":"2026-02-17T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.110185 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.110214 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 08:41:47 crc kubenswrapper[4813]: E0217 08:41:47.110386 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 08:41:47 crc kubenswrapper[4813]: E0217 08:41:47.110540 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.190764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.190819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.190838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.190861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.190878 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:47Z","lastTransitionTime":"2026-02-17T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.293748 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.293807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.293824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.293848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.293866 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:47Z","lastTransitionTime":"2026-02-17T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.397100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.397211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.397233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.397256 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.397273 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:47Z","lastTransitionTime":"2026-02-17T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.500750 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.500809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.500827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.500854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.500872 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:47Z","lastTransitionTime":"2026-02-17T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.603685 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.603749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.603767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.603790 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.603811 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:47Z","lastTransitionTime":"2026-02-17T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.706905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.706960 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.706976 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.707000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.707017 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:47Z","lastTransitionTime":"2026-02-17T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.809733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.809806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.809833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.809875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.809899 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:47Z","lastTransitionTime":"2026-02-17T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.912607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.912671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.912691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.912717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:47 crc kubenswrapper[4813]: I0217 08:41:47.912735 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:47Z","lastTransitionTime":"2026-02-17T08:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.015439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.015511 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.015531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.015555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.015574 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:48Z","lastTransitionTime":"2026-02-17T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.070396 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:29:34.353883133 +0000 UTC
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.110706 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 08:41:48 crc kubenswrapper[4813]: E0217 08:41:48.110927 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.110726 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7"
Feb 17 08:41:48 crc kubenswrapper[4813]: E0217 08:41:48.111419 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.118968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.119034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.119055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.119083 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.119105 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:48Z","lastTransitionTime":"2026-02-17T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.222000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.222062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.222084 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.222114 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.222136 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:48Z","lastTransitionTime":"2026-02-17T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.325445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.325499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.325516 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.325540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.325559 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:48Z","lastTransitionTime":"2026-02-17T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.429079 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.429136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.429154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.429180 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.429199 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:48Z","lastTransitionTime":"2026-02-17T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.539882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.539956 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.539975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.540002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.540028 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:48Z","lastTransitionTime":"2026-02-17T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.643222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.643281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.643302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.643352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.643370 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:48Z","lastTransitionTime":"2026-02-17T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.746005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.746067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.746091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.746121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.746141 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:48Z","lastTransitionTime":"2026-02-17T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.850589 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.850673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.850696 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.850728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.850769 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:48Z","lastTransitionTime":"2026-02-17T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.953845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.953880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.953888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.953907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:48 crc kubenswrapper[4813]: I0217 08:41:48.953916 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:48Z","lastTransitionTime":"2026-02-17T08:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.057000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.057068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.057088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.057123 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.057148 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.071547 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:26:29.365287466 +0000 UTC
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.110255 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.110282 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 08:41:49 crc kubenswrapper[4813]: E0217 08:41:49.110486 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 08:41:49 crc kubenswrapper[4813]: E0217 08:41:49.110611 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.160936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.161012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.161034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.161064 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.161083 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.264344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.264402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.264421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.264445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.264464 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.368065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.368154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.368173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.368202 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.368222 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.471528 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.471641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.471665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.471730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.471752 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.548350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.548408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.548427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.548450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.548467 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:49 crc kubenswrapper[4813]: E0217 08:41:49.571896 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:49Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.577470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.577522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.577540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.577566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.577585 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:49 crc kubenswrapper[4813]: E0217 08:41:49.597775 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:49Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.603300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.603431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.603449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.603479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.603504 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:49 crc kubenswrapper[4813]: E0217 08:41:49.625531 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:49Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.630980 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.631026 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.631045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.631067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.631084 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:49 crc kubenswrapper[4813]: E0217 08:41:49.656298 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:49Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.663184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.663258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.663277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.663301 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.663362 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:49 crc kubenswrapper[4813]: E0217 08:41:49.684457 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:49Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:49 crc kubenswrapper[4813]: E0217 08:41:49.684791 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.687385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.687432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.687453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.687480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.687501 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.790640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.790686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.790703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.790724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.790740 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.921400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.921444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.921460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.921480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:49 crc kubenswrapper[4813]: I0217 08:41:49.921497 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:49Z","lastTransitionTime":"2026-02-17T08:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.024839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.024891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.024908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.024931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.024948 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:50Z","lastTransitionTime":"2026-02-17T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.071733 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:31:52.305529926 +0000 UTC Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.110389 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.110602 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:50 crc kubenswrapper[4813]: E0217 08:41:50.110804 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:50 crc kubenswrapper[4813]: E0217 08:41:50.110970 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.128141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.128205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.128229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.128259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.128281 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:50Z","lastTransitionTime":"2026-02-17T08:41:50Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.231450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.231519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.231542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.231579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.231605 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:50Z","lastTransitionTime":"2026-02-17T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.335009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.335084 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.335108 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.335132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.335150 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:50Z","lastTransitionTime":"2026-02-17T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.438475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.438532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.438552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.438577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.438594 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:50Z","lastTransitionTime":"2026-02-17T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.540907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.540984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.541010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.541035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.541053 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:50Z","lastTransitionTime":"2026-02-17T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.644205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.644259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.644277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.644300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.644345 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:50Z","lastTransitionTime":"2026-02-17T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.747527 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.747584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.747603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.747626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.747643 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:50Z","lastTransitionTime":"2026-02-17T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.851358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.851420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.851442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.851468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.851486 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:50Z","lastTransitionTime":"2026-02-17T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.954487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.954557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.954582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.954615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:50 crc kubenswrapper[4813]: I0217 08:41:50.954639 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:50Z","lastTransitionTime":"2026-02-17T08:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.057583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.057640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.057657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.057686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.057709 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:51Z","lastTransitionTime":"2026-02-17T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.072369 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:45:41.968010677 +0000 UTC Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.110891 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.111276 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:51 crc kubenswrapper[4813]: E0217 08:41:51.112135 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:51 crc kubenswrapper[4813]: E0217 08:41:51.113003 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.160782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.160859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.160883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.160918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.160943 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:51Z","lastTransitionTime":"2026-02-17T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.264629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.264701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.264725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.264760 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.264784 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:51Z","lastTransitionTime":"2026-02-17T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.367852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.367907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.367920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.367939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.367951 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:51Z","lastTransitionTime":"2026-02-17T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.470828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.470888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.470905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.470930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.470946 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:51Z","lastTransitionTime":"2026-02-17T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.574363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.574433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.574453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.574479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.574496 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:51Z","lastTransitionTime":"2026-02-17T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.677346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.677386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.677395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.677412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.677421 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:51Z","lastTransitionTime":"2026-02-17T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.780828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.782046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.782093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.782121 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:51 crc kubenswrapper[4813]: I0217 08:41:51.782143 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:51Z","lastTransitionTime":"2026-02-17T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:51.885669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:51.885733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:51.885749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:51.885777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:51.885803 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:51Z","lastTransitionTime":"2026-02-17T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:51.988789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:51.988849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:51.988868 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:51.988896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:51.988917 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:51Z","lastTransitionTime":"2026-02-17T08:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.072814 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:49:41.393142875 +0000 UTC Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.091844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.091883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.091895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.091913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.091923 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:52Z","lastTransitionTime":"2026-02-17T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.110322 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.110419 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:52 crc kubenswrapper[4813]: E0217 08:41:52.110508 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:52 crc kubenswrapper[4813]: E0217 08:41:52.110635 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.194943 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.195003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.195020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.195046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.195063 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:52Z","lastTransitionTime":"2026-02-17T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.298255 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.298337 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.298361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.298391 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.298414 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:52Z","lastTransitionTime":"2026-02-17T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.401830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.401912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.401930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.401964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.401988 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:52Z","lastTransitionTime":"2026-02-17T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.504395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.504459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.504480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.504507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.504525 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:52Z","lastTransitionTime":"2026-02-17T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.607161 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.607227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.607237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.607257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.607272 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:52Z","lastTransitionTime":"2026-02-17T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.710922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.711009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.711030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.711061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.711084 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:52Z","lastTransitionTime":"2026-02-17T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.814119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.814188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.814205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.814229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.814248 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:52Z","lastTransitionTime":"2026-02-17T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.917754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.917837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.917853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.917882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:52 crc kubenswrapper[4813]: I0217 08:41:52.917899 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:52Z","lastTransitionTime":"2026-02-17T08:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.022074 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.022132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.022151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.022179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.022196 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:53Z","lastTransitionTime":"2026-02-17T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.072951 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:29:27.995115957 +0000 UTC Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.110405 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.110492 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:53 crc kubenswrapper[4813]: E0217 08:41:53.110687 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:53 crc kubenswrapper[4813]: E0217 08:41:53.110850 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.125676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.125737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.125754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.125776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.125796 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:53Z","lastTransitionTime":"2026-02-17T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.137565 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0
9bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.157233 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.178443 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b010250992
0d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.203859 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:40Z\\\",\\\"message\\\":\\\"aiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 
08:41:40.115817 6477 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-srrq7 before timer (time: 2026-02-17 08:41:41.255882976 +0000 UTC m=+1.828758339): skip\\\\nI0217 08:41:40.115924 6477 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 149.213µs)\\\\nI0217 08:41:40.115784 6477 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 08:41:40.115816 6477 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:40.116076 6477 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 08:41:40.116179 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.222013 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb2
7372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.228194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.228229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.228241 4813 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.228256 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.228266 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:53Z","lastTransitionTime":"2026-02-17T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.254919 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a
5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.272125 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.291380 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c
72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.316798 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.331213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:53 crc 
kubenswrapper[4813]: I0217 08:41:53.331258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.331278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.331303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.331347 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:53Z","lastTransitionTime":"2026-02-17T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.335773 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.353634 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc 
kubenswrapper[4813]: I0217 08:41:53.374851 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.394820 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d656c08-9eaf-4e82-85a6-e55c5db21ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7afd399e353b112d2e6f6cce0e4a33a6a441dbe9aa5ed71b9a3d97dc2b5ccad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20515bd60acb180cb02d22db9ef7b9556ca5b2747ae85d57b78afdc866987007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94eaf80152a93f49d2f1a1e0e90c908ff589a8333803e08fa1c1d2a13122d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.418585 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.434486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.434550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.434575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 
08:41:53.434605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.434625 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:53Z","lastTransitionTime":"2026-02-17T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.443191 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.469529 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.497988 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.520143 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:53Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.537954 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.538021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.538042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:53 crc 
kubenswrapper[4813]: I0217 08:41:53.538070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.538089 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:53Z","lastTransitionTime":"2026-02-17T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.641896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.641950 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.641965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.641989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.642006 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:53Z","lastTransitionTime":"2026-02-17T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.745406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.745471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.745490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.745562 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.745582 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:53Z","lastTransitionTime":"2026-02-17T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.849516 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.849603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.849626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.849658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.849682 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:53Z","lastTransitionTime":"2026-02-17T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.952892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.952955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.952972 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.952997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:53 crc kubenswrapper[4813]: I0217 08:41:53.953015 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:53Z","lastTransitionTime":"2026-02-17T08:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.055756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.055810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.055833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.055864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.055887 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:54Z","lastTransitionTime":"2026-02-17T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.073950 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:46:16.109049007 +0000 UTC
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.110234 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.110249 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 08:41:54 crc kubenswrapper[4813]: E0217 08:41:54.110427 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b"
Feb 17 08:41:54 crc kubenswrapper[4813]: E0217 08:41:54.110569 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.159910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.159974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.159991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.160018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.160036 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:54Z","lastTransitionTime":"2026-02-17T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.262585 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.262645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.262662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.262687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.262704 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:54Z","lastTransitionTime":"2026-02-17T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.365776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.365852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.365870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.365895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.365914 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:54Z","lastTransitionTime":"2026-02-17T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.469383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.469468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.469540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.469592 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.469618 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:54Z","lastTransitionTime":"2026-02-17T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.572865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.572930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.572949 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.572974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.572993 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:54Z","lastTransitionTime":"2026-02-17T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.675557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.675609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.675626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.675654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.675676 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:54Z","lastTransitionTime":"2026-02-17T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.779166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.779239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.779257 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.779286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.779304 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:54Z","lastTransitionTime":"2026-02-17T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.882677 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.882741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.882761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.882786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:54 crc kubenswrapper[4813]: I0217 08:41:54.882804 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:54Z","lastTransitionTime":"2026-02-17T08:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.001929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.001992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.002010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.002034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.002051 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:55Z","lastTransitionTime":"2026-02-17T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.074937 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:24:41.974955381 +0000 UTC Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.104800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.104878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.104895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.104922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.104942 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:55Z","lastTransitionTime":"2026-02-17T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.111222 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.111273 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:55 crc kubenswrapper[4813]: E0217 08:41:55.111455 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:55 crc kubenswrapper[4813]: E0217 08:41:55.111718 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.112835 4813 scope.go:117] "RemoveContainer" containerID="d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f" Feb 17 08:41:55 crc kubenswrapper[4813]: E0217 08:41:55.113223 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.207832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.207891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.207908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.207932 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.207950 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:55Z","lastTransitionTime":"2026-02-17T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.310483 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.310561 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.310579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.310607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.310630 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:55Z","lastTransitionTime":"2026-02-17T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.413339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.413402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.413419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.413449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.413468 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:55Z","lastTransitionTime":"2026-02-17T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.516136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.516177 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.516231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.516252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.516270 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:55Z","lastTransitionTime":"2026-02-17T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.618702 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.618789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.618807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.618836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.618854 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:55Z","lastTransitionTime":"2026-02-17T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.724765 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.724815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.724835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.724863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.724895 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:55Z","lastTransitionTime":"2026-02-17T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.827921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.827994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.828018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.828046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.828064 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:55Z","lastTransitionTime":"2026-02-17T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.930781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.930832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.930855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.930882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:55 crc kubenswrapper[4813]: I0217 08:41:55.930904 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:55Z","lastTransitionTime":"2026-02-17T08:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.034246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.034355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.034380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.034413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.034441 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:56Z","lastTransitionTime":"2026-02-17T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.075049 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:25:51.009543837 +0000 UTC Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.111024 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.111047 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:56 crc kubenswrapper[4813]: E0217 08:41:56.111206 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:56 crc kubenswrapper[4813]: E0217 08:41:56.111456 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.137467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.137515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.137529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.137552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.137568 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:56Z","lastTransitionTime":"2026-02-17T08:41:56Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.240265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.240342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.240356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.240375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.240388 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:56Z","lastTransitionTime":"2026-02-17T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.343496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.343563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.343579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.343600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.343616 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:56Z","lastTransitionTime":"2026-02-17T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.445904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.446374 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.446522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.446674 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.446815 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:56Z","lastTransitionTime":"2026-02-17T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.550990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.551069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.551094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.551127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.551151 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:56Z","lastTransitionTime":"2026-02-17T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.654923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.655006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.655031 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.655060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.655082 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:56Z","lastTransitionTime":"2026-02-17T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.758239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.758355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.758386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.758420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.758441 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:56Z","lastTransitionTime":"2026-02-17T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.861685 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.862048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.862215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.862645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.862944 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:56Z","lastTransitionTime":"2026-02-17T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.966221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.966272 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.966289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.966346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:56 crc kubenswrapper[4813]: I0217 08:41:56.966366 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:56Z","lastTransitionTime":"2026-02-17T08:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.069292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.069731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.069875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.070005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.070376 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:57Z","lastTransitionTime":"2026-02-17T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.075530 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:42:39.459433315 +0000 UTC Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.111278 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.111374 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:57 crc kubenswrapper[4813]: E0217 08:41:57.111825 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:57 crc kubenswrapper[4813]: E0217 08:41:57.111665 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.173359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.173398 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.173407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.173420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.173429 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:57Z","lastTransitionTime":"2026-02-17T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.275691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.276064 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.276211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.276403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.276555 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:57Z","lastTransitionTime":"2026-02-17T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.379331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.379373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.379382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.379396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.379406 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:57Z","lastTransitionTime":"2026-02-17T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.481849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.482175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.482368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.482608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.482766 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:57Z","lastTransitionTime":"2026-02-17T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.585935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.586025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.586042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.586099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.586119 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:57Z","lastTransitionTime":"2026-02-17T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.689273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.689404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.689423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.689446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.689494 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:57Z","lastTransitionTime":"2026-02-17T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.793116 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.793174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.793193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.793218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.793238 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:57Z","lastTransitionTime":"2026-02-17T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.896557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.896624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.896641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.896666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.896685 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:57Z","lastTransitionTime":"2026-02-17T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.999432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.999471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.999480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.999494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:57 crc kubenswrapper[4813]: I0217 08:41:57.999504 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:57Z","lastTransitionTime":"2026-02-17T08:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.076391 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:24:45.377708101 +0000 UTC Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.102004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.102086 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.102112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.102150 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.102174 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:58Z","lastTransitionTime":"2026-02-17T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.110426 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.110476 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:41:58 crc kubenswrapper[4813]: E0217 08:41:58.110612 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:41:58 crc kubenswrapper[4813]: E0217 08:41:58.110827 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.204771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.204841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.204863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.204893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.204914 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:58Z","lastTransitionTime":"2026-02-17T08:41:58Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.307618 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.307676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.307693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.307717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.307734 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:58Z","lastTransitionTime":"2026-02-17T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.410324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.410362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.410371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.410385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.410394 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:58Z","lastTransitionTime":"2026-02-17T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.512639 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.512703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.512721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.512746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.512763 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:58Z","lastTransitionTime":"2026-02-17T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.615432 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.615499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.615516 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.615542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.615560 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:58Z","lastTransitionTime":"2026-02-17T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.718412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.718481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.718504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.718532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.718549 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:58Z","lastTransitionTime":"2026-02-17T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.820643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.820711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.820738 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.820765 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.820786 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:58Z","lastTransitionTime":"2026-02-17T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.923813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.923871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.923889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.923914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:58 crc kubenswrapper[4813]: I0217 08:41:58.923933 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:58Z","lastTransitionTime":"2026-02-17T08:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.028717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.028804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.028831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.028865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.028891 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.076492 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:06:44.387090326 +0000 UTC Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.111513 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:41:59 crc kubenswrapper[4813]: E0217 08:41:59.111736 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.112203 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:41:59 crc kubenswrapper[4813]: E0217 08:41:59.112359 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.131580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.131614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.131630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.131654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.131670 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.234271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.234318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.234327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.234341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.234350 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.336712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.336762 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.336774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.336796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.336809 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.439447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.439501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.439514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.439532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.439543 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.542942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.543007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.543025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.543047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.543065 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.644516 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.644549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.644560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.644575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.644585 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.746834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.746904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.746922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.746947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.746964 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.849464 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.849522 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.849541 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.849568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.849585 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.931024 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.931087 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.931106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.931130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.931147 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: E0217 08:41:59.948720 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.952122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.952171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.952190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.952213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.952230 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: E0217 08:41:59.969530 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.973439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.973496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.973514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.973539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.973557 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:41:59 crc kubenswrapper[4813]: E0217 08:41:59.989295 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:41:59Z is after 2025-08-24T17:21:41Z" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.993408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.993443 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.993455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.993472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:41:59 crc kubenswrapper[4813]: I0217 08:41:59.993483 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:41:59Z","lastTransitionTime":"2026-02-17T08:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:00 crc kubenswrapper[4813]: E0217 08:42:00.009639 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.013807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.013841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.013851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.013865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.013877 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:00Z","lastTransitionTime":"2026-02-17T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:00 crc kubenswrapper[4813]: E0217 08:42:00.031990 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:00Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:00 crc kubenswrapper[4813]: E0217 08:42:00.032111 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.033686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.033715 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.033725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.033737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.033745 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:00Z","lastTransitionTime":"2026-02-17T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.077368 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 06:03:39.526384309 +0000 UTC Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.110804 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.110875 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:00 crc kubenswrapper[4813]: E0217 08:42:00.111184 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:00 crc kubenswrapper[4813]: E0217 08:42:00.111414 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.122621 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.137118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.137149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.137158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.137171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.137181 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:00Z","lastTransitionTime":"2026-02-17T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.239922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.239968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.239977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.239992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.240003 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:00Z","lastTransitionTime":"2026-02-17T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.248594 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:00 crc kubenswrapper[4813]: E0217 08:42:00.248790 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 08:42:00 crc kubenswrapper[4813]: E0217 08:42:00.248914 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs podName:b42b143b-e85b-44cc-a427-ba1ebd82c55b nodeName:}" failed. No retries permitted until 2026-02-17 08:42:32.248885953 +0000 UTC m=+99.909647216 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs") pod "network-metrics-daemon-srrq7" (UID: "b42b143b-e85b-44cc-a427-ba1ebd82c55b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.342962 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.343096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.343126 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.343155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.343176 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:00Z","lastTransitionTime":"2026-02-17T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.446634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.446741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.446763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.446791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.446812 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:00Z","lastTransitionTime":"2026-02-17T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.550130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.550185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.550201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.550224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.550241 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:00Z","lastTransitionTime":"2026-02-17T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.652194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.652234 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.652244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.652262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.652272 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:00Z","lastTransitionTime":"2026-02-17T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.755167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.755225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.755242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.755265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.755282 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:00Z","lastTransitionTime":"2026-02-17T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.857966 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.858027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.858050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.858080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.858102 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:00Z","lastTransitionTime":"2026-02-17T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.959800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.959848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.959858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.959877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:00 crc kubenswrapper[4813]: I0217 08:42:00.959889 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:00Z","lastTransitionTime":"2026-02-17T08:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.061703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.061759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.061771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.061787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.061798 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:01Z","lastTransitionTime":"2026-02-17T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.078425 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:37:25.354264101 +0000 UTC Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.110826 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.110903 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:01 crc kubenswrapper[4813]: E0217 08:42:01.110962 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:01 crc kubenswrapper[4813]: E0217 08:42:01.111104 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.165798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.165864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.165885 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.165919 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.165943 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:01Z","lastTransitionTime":"2026-02-17T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.268418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.268460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.268468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.268483 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.268492 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:01Z","lastTransitionTime":"2026-02-17T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.370694 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.370734 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.370746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.370762 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.370773 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:01Z","lastTransitionTime":"2026-02-17T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.472417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.472450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.472459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.472474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.472486 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:01Z","lastTransitionTime":"2026-02-17T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.572364 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swpdn_9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0/kube-multus/0.log" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.572424 4813 generic.go:334] "Generic (PLEG): container finished" podID="9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0" containerID="fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc" exitCode=1 Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.572459 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swpdn" event={"ID":"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0","Type":"ContainerDied","Data":"fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc"} Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.572889 4813 scope.go:117] "RemoveContainer" containerID="fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.574962 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.574992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.575000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.575016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.575026 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:01Z","lastTransitionTime":"2026-02-17T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.590982 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.606673 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.620274 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e23
4105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.641699 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:40Z\\\",\\\"message\\\":\\\"aiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 08:41:40.115817 6477 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-srrq7 before timer (time: 2026-02-17 08:41:41.255882976 +0000 UTC m=+1.828758339): skip\\\\nI0217 08:41:40.115924 6477 obj_retry.go:420] Function 
iterateRetryResources for *v1.Pod ended (in 149.213µs)\\\\nI0217 08:41:40.115784 6477 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 08:41:40.115816 6477 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:40.116076 6477 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 08:41:40.116179 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.657173 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb2
7372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.674461 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6ab
d7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.678587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.678625 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.678634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.678652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.678661 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:01Z","lastTransitionTime":"2026-02-17T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.688543 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.700109 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.711832 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.725688 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:42:01Z\\\",\\\"message\\\":\\\"2026-02-17T08:41:15+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499\\\\n2026-02-17T08:41:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499 to /host/opt/cni/bin/\\\\n2026-02-17T08:41:16Z [verbose] multus-daemon started\\\\n2026-02-17T08:41:16Z [verbose] Readiness Indicator file check\\\\n2026-02-17T08:42:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.735585 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.745294 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc 
kubenswrapper[4813]: I0217 08:42:01.753668 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7656230f-21ef-4fcf-8ead-2329a2a4be2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ddb252c2ed9e53e9c1fffa1b4ff0929f35762930bc4d83d6fe9659653ed5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.770187 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.781115 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.781154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.781162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.781178 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.781189 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:01Z","lastTransitionTime":"2026-02-17T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.782182 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.793071 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.806830 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.820482 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.830656 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d656c08-9eaf-4e82-85a6-e55c5db21ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7afd399e353b112d2e6f6cce0e4a33a6a441dbe9aa5ed71b9a3d97dc2b5ccad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20515bd60acb180cb02d22db9ef7b9556ca5b2747ae85d57b78afdc866987007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94eaf80152a93f49d2f1a1e0e90c908ff589a8333803e08fa1c1d2a13122d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:01Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.883394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.883479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.883498 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.883526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.883545 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:01Z","lastTransitionTime":"2026-02-17T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.985900 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.985931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.985939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.985955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:01 crc kubenswrapper[4813]: I0217 08:42:01.985964 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:01Z","lastTransitionTime":"2026-02-17T08:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.079078 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:13:32.449519507 +0000 UTC Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.088140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.088195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.088208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.088229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.088244 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:02Z","lastTransitionTime":"2026-02-17T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.110451 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.110476 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:02 crc kubenswrapper[4813]: E0217 08:42:02.110577 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:02 crc kubenswrapper[4813]: E0217 08:42:02.110716 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.190334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.190390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.190404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.190426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.190440 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:02Z","lastTransitionTime":"2026-02-17T08:42:02Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.293333 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.293372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.293381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.293397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.293409 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:02Z","lastTransitionTime":"2026-02-17T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.396172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.396232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.396254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.396282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.396300 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:02Z","lastTransitionTime":"2026-02-17T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.499369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.499427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.499448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.499476 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.499497 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:02Z","lastTransitionTime":"2026-02-17T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.580192 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swpdn_9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0/kube-multus/0.log" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.580290 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swpdn" event={"ID":"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0","Type":"ContainerStarted","Data":"05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40"} Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.603138 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.605536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.605600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:02 crc kubenswrapper[4813]: 
I0217 08:42:02.605627 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.605657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.605679 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:02Z","lastTransitionTime":"2026-02-17T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.615900 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc
2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.632909 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e23
4105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.660013 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:40Z\\\",\\\"message\\\":\\\"aiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 08:41:40.115817 6477 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-srrq7 before timer (time: 2026-02-17 08:41:41.255882976 +0000 UTC m=+1.828758339): skip\\\\nI0217 08:41:40.115924 6477 obj_retry.go:420] Function 
iterateRetryResources for *v1.Pod ended (in 149.213µs)\\\\nI0217 08:41:40.115784 6477 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 08:41:40.115816 6477 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:40.116076 6477 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 08:41:40.116179 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.674821 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb2
7372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.685898 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.695809 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc 
kubenswrapper[4813]: I0217 08:42:02.704546 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7656230f-21ef-4fcf-8ead-2329a2a4be2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ddb252c2ed9e53e9c1fffa1b4ff0929f35762930bc4d83d6fe9659653ed5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.709852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.709896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.709905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 
08:42:02.709921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.709932 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:02Z","lastTransitionTime":"2026-02-17T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.724784 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-
dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.737491 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.751606 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c
72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.767525 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:42:01Z\\\",\\\"message\\\":\\\"2026-02-17T08:41:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499\\\\n2026-02-17T08:41:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499 to /host/opt/cni/bin/\\\\n2026-02-17T08:41:16Z [verbose] multus-daemon started\\\\n2026-02-17T08:41:16Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T08:42:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.777448 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.793175 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d656c08-9eaf-4e82-85a6-e55c5db21ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7afd399e353b112d2e6f6cce0e4a33a6a441dbe9aa5ed71b9a3d97dc2b5ccad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20515bd60acb180cb02d22db9ef7b9556ca5b2747ae85d57b78afdc866987007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94eaf80152a93f49d2f1a1e0e90c908ff589a8333803e08fa1c1d2a13122d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.803753 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.813284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.813326 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.813335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 
08:42:02.813350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.813362 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:02Z","lastTransitionTime":"2026-02-17T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.815083 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.830665 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.842560 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.855253 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:02Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.916698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.916745 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.916757 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:02 crc 
kubenswrapper[4813]: I0217 08:42:02.916775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:02 crc kubenswrapper[4813]: I0217 08:42:02.916787 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:02Z","lastTransitionTime":"2026-02-17T08:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.019401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.019439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.019448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.019461 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.019470 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:03Z","lastTransitionTime":"2026-02-17T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.079816 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:26:37.091255844 +0000 UTC Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.110600 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.110631 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:03 crc kubenswrapper[4813]: E0217 08:42:03.110722 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:03 crc kubenswrapper[4813]: E0217 08:42:03.110857 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.122064 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.122134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.122156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.122187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.122208 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:03Z","lastTransitionTime":"2026-02-17T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.130096 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d656c08-9eaf-4e82-85a6-e55c5db21ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7afd399e353b112d2e6f6cce0e4a33a6a441dbe9aa5ed71b9a3d97dc2b5ccad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20515bd60acb180cb02d22db9ef7b9
556ca5b2747ae85d57b78afdc866987007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94eaf80152a93f49d2f1a1e0e90c908ff589a8333803e08fa1c1d2a13122d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.148765 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.160999 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.177633 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.187977 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.200083 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.213177 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.223963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.223997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.224006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.224020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.224030 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:03Z","lastTransitionTime":"2026-02-17T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.226081 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.239919 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14
991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.281565 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:40Z\\\",\\\"message\\\":\\\"aiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 08:41:40.115817 6477 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-srrq7 before timer (time: 2026-02-17 08:41:41.255882976 +0000 UTC m=+1.828758339): skip\\\\nI0217 08:41:40.115924 6477 obj_retry.go:420] Function 
iterateRetryResources for *v1.Pod ended (in 149.213µs)\\\\nI0217 08:41:40.115784 6477 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 08:41:40.115816 6477 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:40.116076 6477 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 08:41:40.116179 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.304424 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb2
7372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.321945 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6ab
d7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.325797 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.325834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.325843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.325857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.325867 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:03Z","lastTransitionTime":"2026-02-17T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.343350 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.352665 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.364575 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c
72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.375154 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:42:01Z\\\",\\\"message\\\":\\\"2026-02-17T08:41:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499\\\\n2026-02-17T08:41:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499 to /host/opt/cni/bin/\\\\n2026-02-17T08:41:16Z [verbose] multus-daemon started\\\\n2026-02-17T08:41:16Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T08:42:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.386001 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b
38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.397123 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:03 crc 
kubenswrapper[4813]: I0217 08:42:03.405791 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7656230f-21ef-4fcf-8ead-2329a2a4be2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ddb252c2ed9e53e9c1fffa1b4ff0929f35762930bc4d83d6fe9659653ed5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:03Z is after 2025-08-24T17:21:41Z"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.428802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.428830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.428839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.428852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.428861 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:03Z","lastTransitionTime":"2026-02-17T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.530194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.530226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.530235 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.530248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.530258 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:03Z","lastTransitionTime":"2026-02-17T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.633159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.633284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.633303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.633363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.633381 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:03Z","lastTransitionTime":"2026-02-17T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.736346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.736421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.736523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.736607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.736660 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:03Z","lastTransitionTime":"2026-02-17T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.839491 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.839521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.839529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.839542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.839552 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:03Z","lastTransitionTime":"2026-02-17T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.942509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.942569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.942587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.942610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:03 crc kubenswrapper[4813]: I0217 08:42:03.942628 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:03Z","lastTransitionTime":"2026-02-17T08:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.045516 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.045570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.045586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.045610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.045627 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:04Z","lastTransitionTime":"2026-02-17T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.080967 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:45:40.098095513 +0000 UTC
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.110743 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.110824 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7"
Feb 17 08:42:04 crc kubenswrapper[4813]: E0217 08:42:04.110888 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 08:42:04 crc kubenswrapper[4813]: E0217 08:42:04.111019 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.148699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.148759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.148779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.148804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.148821 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:04Z","lastTransitionTime":"2026-02-17T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.251450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.251515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.251534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.251559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.251580 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:04Z","lastTransitionTime":"2026-02-17T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.353483 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.353534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.353548 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.353567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.353581 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:04Z","lastTransitionTime":"2026-02-17T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.457166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.457219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.457233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.457253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.457266 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:04Z","lastTransitionTime":"2026-02-17T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.560040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.560086 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.560098 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.560117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.560130 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:04Z","lastTransitionTime":"2026-02-17T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.662034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.662083 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.662094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.662111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.662121 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:04Z","lastTransitionTime":"2026-02-17T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.764403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.764455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.764467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.764487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.764500 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:04Z","lastTransitionTime":"2026-02-17T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.866583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.866635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.866648 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.866667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.866680 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:04Z","lastTransitionTime":"2026-02-17T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.968946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.969046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.969062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.969085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:04 crc kubenswrapper[4813]: I0217 08:42:04.969102 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:04Z","lastTransitionTime":"2026-02-17T08:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.072482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.072560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.072582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.072614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.072638 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:05Z","lastTransitionTime":"2026-02-17T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.081160 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 04:44:32.327643848 +0000 UTC
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.110716 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.110777 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 08:42:05 crc kubenswrapper[4813]: E0217 08:42:05.110915 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 08:42:05 crc kubenswrapper[4813]: E0217 08:42:05.111028 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.176338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.176418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.176441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.176472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.176493 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:05Z","lastTransitionTime":"2026-02-17T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.279381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.279429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.279441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.279459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.279472 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:05Z","lastTransitionTime":"2026-02-17T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.381655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.381723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.381746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.381775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.381799 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:05Z","lastTransitionTime":"2026-02-17T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.484128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.484172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.484181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.484199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.484210 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:05Z","lastTransitionTime":"2026-02-17T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.586470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.586510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.586520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.586537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.586549 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:05Z","lastTransitionTime":"2026-02-17T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.690080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.690512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.690675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.690899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.691054 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:05Z","lastTransitionTime":"2026-02-17T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.793544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.793598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.793610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.793634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.793646 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:05Z","lastTransitionTime":"2026-02-17T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.896454 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.896537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.896550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.896568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.896580 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:05Z","lastTransitionTime":"2026-02-17T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.998848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.998885 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.998896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.998911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:05 crc kubenswrapper[4813]: I0217 08:42:05.998923 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:05Z","lastTransitionTime":"2026-02-17T08:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.082059 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:11:00.804192792 +0000 UTC
Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.100459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.100511 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.100547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.100566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.100578 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:06Z","lastTransitionTime":"2026-02-17T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.110387 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7"
Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.110396 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:06 crc kubenswrapper[4813]: E0217 08:42:06.110676 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:06 crc kubenswrapper[4813]: E0217 08:42:06.110917 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.111465 4813 scope.go:117] "RemoveContainer" containerID="d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.202649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.202745 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.202763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.202827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.202851 4813 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:06Z","lastTransitionTime":"2026-02-17T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.305532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.305558 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.305566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.305581 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.305591 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:06Z","lastTransitionTime":"2026-02-17T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.408207 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.408240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.408250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.408264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.408274 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:06Z","lastTransitionTime":"2026-02-17T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.510661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.510694 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.510704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.510717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.510727 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:06Z","lastTransitionTime":"2026-02-17T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.593747 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/2.log" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.595759 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532"} Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.596659 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.612055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.612217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.612280 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.612363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.612421 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:06Z","lastTransitionTime":"2026-02-17T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.613379 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.630267 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.648650 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:42:01Z\\\",\\\"message\\\":\\\"2026-02-17T08:41:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499\\\\n2026-02-17T08:41:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499 to /host/opt/cni/bin/\\\\n2026-02-17T08:41:16Z [verbose] multus-daemon started\\\\n2026-02-17T08:41:16Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T08:42:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.660411 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b
38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.673805 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc 
kubenswrapper[4813]: I0217 08:42:06.686419 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7656230f-21ef-4fcf-8ead-2329a2a4be2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ddb252c2ed9e53e9c1fffa1b4ff0929f35762930bc4d83d6fe9659653ed5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.706550 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.715089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.715324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.715541 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.715717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.715900 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:06Z","lastTransitionTime":"2026-02-17T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.721818 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.732888 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.751024 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.765046 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.780487 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d656c08-9eaf-4e82-85a6-e55c5db21ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7afd399e353b112d2e6f6cce0e4a33a6a441dbe9aa5ed71b9a3d97dc2b5ccad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20515bd60acb180cb02d22db9ef7b9556ca5b2747ae85d57b78afdc866987007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94eaf80152a93f49d2f1a1e0e90c908ff589a8333803e08fa1c1d2a13122d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.793808 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.806545 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.818303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.818395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.818405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:06 crc 
kubenswrapper[4813]: I0217 08:42:06.818428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.818440 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:06Z","lastTransitionTime":"2026-02-17T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.823142 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25
4ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.843425 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:40Z\\\",\\\"message\\\":\\\"aiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 08:41:40.115817 6477 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-srrq7 before timer (time: 2026-02-17 08:41:41.255882976 +0000 UTC m=+1.828758339): skip\\\\nI0217 08:41:40.115924 6477 obj_retry.go:420] Function 
iterateRetryResources for *v1.Pod ended (in 149.213µs)\\\\nI0217 08:41:40.115784 6477 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 08:41:40.115816 6477 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:40.116076 6477 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 08:41:40.116179 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.854839 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb2
7372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.869111 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6ab
d7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.880083 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:06Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.920634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.920661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.920669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.920681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:06 crc kubenswrapper[4813]: I0217 08:42:06.920690 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:06Z","lastTransitionTime":"2026-02-17T08:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.023553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.023588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.023602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.023622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.023637 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:07Z","lastTransitionTime":"2026-02-17T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.082420 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 22:18:34.966743289 +0000 UTC Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.110684 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:07 crc kubenswrapper[4813]: E0217 08:42:07.110820 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.110688 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:07 crc kubenswrapper[4813]: E0217 08:42:07.110942 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.126180 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.126247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.126270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.126299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.126377 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:07Z","lastTransitionTime":"2026-02-17T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.229151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.229187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.229196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.229210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.229220 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:07Z","lastTransitionTime":"2026-02-17T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.332171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.332241 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.332262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.332286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.332305 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:07Z","lastTransitionTime":"2026-02-17T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.435753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.435809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.435827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.435850 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.435870 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:07Z","lastTransitionTime":"2026-02-17T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.538351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.538395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.538407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.538423 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.538439 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:07Z","lastTransitionTime":"2026-02-17T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.601498 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/3.log" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.602436 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/2.log" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.605907 4813 generic.go:334] "Generic (PLEG): container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532" exitCode=1 Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.605967 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532"} Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.606016 4813 scope.go:117] "RemoveContainer" containerID="d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.607200 4813 scope.go:117] "RemoveContainer" containerID="fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532" Feb 17 08:42:07 crc kubenswrapper[4813]: E0217 08:42:07.607679 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.623164 4813 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.639706 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.640811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.640859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.640874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:07 crc 
kubenswrapper[4813]: I0217 08:42:07.640896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.640912 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:07Z","lastTransitionTime":"2026-02-17T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.664304 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25
4ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.696387 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d469c0ef143f96d4b5f4f2534d60b7e0bb74e27c8907cb97cf82bbe7b72b274f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:41:40Z\\\",\\\"message\\\":\\\"aiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0217 08:41:40.115817 6477 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-srrq7 before timer (time: 2026-02-17 08:41:41.255882976 +0000 UTC m=+1.828758339): skip\\\\nI0217 08:41:40.115924 6477 obj_retry.go:420] Function 
iterateRetryResources for *v1.Pod ended (in 149.213µs)\\\\nI0217 08:41:40.115784 6477 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-oauth-apiserver/api]} name:Service_openshift-oauth-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.140:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {fe46cb89-4e54-4175-a112-1c5224cd299e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 08:41:40.115816 6477 ovnkube.go:599] Stopped ovnkube\\\\nI0217 08:41:40.116076 6477 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 08:41:40.116179 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:42:07Z\\\",\\\"message\\\":\\\"org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.161\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 08:42:06.979444 6860 services_controller.go:452] Built service openshift-config-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0217 08:42:06.979452 6860 services_controller.go:453] Built service openshift-config-operator/metrics template LB for network=default: []services.LB{}\\\\nI0217 08:42:06.978247 6860 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0217 08:42:06.979460 6860 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\
\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\
\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: 
I0217 08:42:07.713173 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb27372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.734496 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6ab
d7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.743124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.743280 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.743353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.743430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.743487 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:07Z","lastTransitionTime":"2026-02-17T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.750724 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.768542 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.788118 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.809126 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:42:01Z\\\",\\\"message\\\":\\\"2026-02-17T08:41:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499\\\\n2026-02-17T08:41:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499 to /host/opt/cni/bin/\\\\n2026-02-17T08:41:16Z [verbose] multus-daemon started\\\\n2026-02-17T08:41:16Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T08:42:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.825065 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b
38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.840773 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc 
kubenswrapper[4813]: I0217 08:42:07.845907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.845969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.845994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.846025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.846047 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:07Z","lastTransitionTime":"2026-02-17T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.860117 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7656230f-21ef-4fcf-8ead-2329a2a4be2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ddb252c2ed9e53e9c1fffa1b4ff0929f35762930bc4d83d6fe9659653ed5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.893083 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.914057 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.933623 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.949683 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.949735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.949753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.949776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.949794 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:07Z","lastTransitionTime":"2026-02-17T08:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.954721 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.974676 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f
9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:07 crc kubenswrapper[4813]: I0217 08:42:07.993498 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d656c08-9eaf-4e82-85a6-e55c5db21ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7afd399e353b112d2e6f6cce0e4a33a6a441dbe9aa5ed71b9a3d97dc2b5ccad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20515bd60acb180cb02d22db9ef7b9556ca5b2747ae85d57b78afdc866987007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94eaf80152a93f49d2f1a1e0e90c908ff589a8333803e08fa1c1d2a13122d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:07Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.052573 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.052629 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.052646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.052695 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.052714 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:08Z","lastTransitionTime":"2026-02-17T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.082873 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:50:20.602619109 +0000 UTC Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.110483 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:08 crc kubenswrapper[4813]: E0217 08:42:08.110648 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.110480 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:08 crc kubenswrapper[4813]: E0217 08:42:08.111023 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.156383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.156531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.156661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.156779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.156903 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:08Z","lastTransitionTime":"2026-02-17T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.261805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.261881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.261902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.261925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.261941 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:08Z","lastTransitionTime":"2026-02-17T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.369545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.369620 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.369644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.369693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.369719 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:08Z","lastTransitionTime":"2026-02-17T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.472155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.472196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.472212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.472233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.472249 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:08Z","lastTransitionTime":"2026-02-17T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.574722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.574771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.574788 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.574809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.574825 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:08Z","lastTransitionTime":"2026-02-17T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.612030 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/3.log" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.617947 4813 scope.go:117] "RemoveContainer" containerID="fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532" Feb 17 08:42:08 crc kubenswrapper[4813]: E0217 08:42:08.618413 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.637054 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7656230f-21ef-4fcf-8ead-2329a2a4be2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ddb252c2ed9e53e9c1fffa1b4ff0929f35762930bc4d83d6fe9659653ed5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.670984 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.678014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.678153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.678176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.678208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.678230 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:08Z","lastTransitionTime":"2026-02-17T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.691511 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.709819 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-conf
ig\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.731059 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:42:01Z\\\",\\\"message\\\":\\\"2026-02-17T08:41:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499\\\\n2026-02-17T08:41:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499 to /host/opt/cni/bin/\\\\n2026-02-17T08:41:16Z [verbose] multus-daemon started\\\\n2026-02-17T08:41:16Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T08:42:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.747761 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b
38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.763027 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc 
kubenswrapper[4813]: I0217 08:42:08.780907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.780993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.781018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.781051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.781075 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:08Z","lastTransitionTime":"2026-02-17T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.782535 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f2
5ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.795064 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d656c08-9eaf-4e82-85a6-e55c5db21ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7afd399e353b112d2e6f6cce0e4a33a6a441dbe9aa5ed71b9a3d97dc2b5ccad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20515bd60acb180cb02d22db9ef7b9556ca5b2747ae85d57b78afdc866987007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94eaf80152a93f49d2f1a1e0e90c908ff589a8333803e08fa1c1d2a13122d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.809743 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.823008 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.840044 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.853289 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.865608 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.881148 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6ab
d7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.883779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.883830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.883841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.883858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.883869 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:08Z","lastTransitionTime":"2026-02-17T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.894084 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.907418 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.932170 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:42:07Z\\\",\\\"message\\\":\\\"org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.161\\\\\\\", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 08:42:06.979444 6860 services_controller.go:452] Built service openshift-config-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0217 08:42:06.979452 6860 services_controller.go:453] Built service openshift-config-operator/metrics template LB for network=default: []services.LB{}\\\\nI0217 08:42:06.978247 6860 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0217 08:42:06.979460 6860 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:42:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.945191 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb2
7372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:08Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.986610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.986679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.986703 4813 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.986732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:08 crc kubenswrapper[4813]: I0217 08:42:08.986754 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:08Z","lastTransitionTime":"2026-02-17T08:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.083377 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 00:26:01.444059262 +0000 UTC Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.089657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.089729 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.089811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.089842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.089864 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:09Z","lastTransitionTime":"2026-02-17T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.110252 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.110255 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:09 crc kubenswrapper[4813]: E0217 08:42:09.110427 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:09 crc kubenswrapper[4813]: E0217 08:42:09.110553 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.193118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.193281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.193303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.193350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.193368 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:09Z","lastTransitionTime":"2026-02-17T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.296912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.296982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.296999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.297028 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.297048 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:09Z","lastTransitionTime":"2026-02-17T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.399638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.399698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.399714 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.399739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.399757 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:09Z","lastTransitionTime":"2026-02-17T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.503652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.504071 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.504220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.504412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.504541 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:09Z","lastTransitionTime":"2026-02-17T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.608095 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.608382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.608549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.608689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.608850 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:09Z","lastTransitionTime":"2026-02-17T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.712607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.712680 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.712697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.712724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.712742 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:09Z","lastTransitionTime":"2026-02-17T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.815615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.815655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.815664 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.815677 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.815690 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:09Z","lastTransitionTime":"2026-02-17T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.919251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.919345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.919367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.919392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:09 crc kubenswrapper[4813]: I0217 08:42:09.919413 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:09Z","lastTransitionTime":"2026-02-17T08:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.022364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.022428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.022446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.022472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.022492 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.084173 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 06:23:27.28201615 +0000 UTC Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.111192 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:10 crc kubenswrapper[4813]: E0217 08:42:10.111456 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.111570 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:10 crc kubenswrapper[4813]: E0217 08:42:10.111691 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.147473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.147559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.147580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.147608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.147696 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: E0217 08:42:10.171014 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.177065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.177256 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.177737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.178122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.178274 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: E0217 08:42:10.198505 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.203560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.203610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.203628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.203653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.203670 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: E0217 08:42:10.222836 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.227844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.228197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.228652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.228862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.229079 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: E0217 08:42:10.250907 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.256684 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.256727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.256739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.256757 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.256769 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: E0217 08:42:10.274659 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:10Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:10 crc kubenswrapper[4813]: E0217 08:42:10.275188 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.277340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.277384 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.277404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.277426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.277442 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.380785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.380845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.380861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.380885 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.380904 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.484013 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.484069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.484085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.484107 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.484123 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.586279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.586362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.586379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.586401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.586420 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.689772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.689834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.689853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.689876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.689893 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.793068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.793461 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.793651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.793869 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.794132 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.897458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.897825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.897974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.898130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:10 crc kubenswrapper[4813]: I0217 08:42:10.898389 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:10Z","lastTransitionTime":"2026-02-17T08:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.001716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.001808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.001828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.001854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.001871 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:11Z","lastTransitionTime":"2026-02-17T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.085231 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:25:40.644590558 +0000 UTC Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.104677 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.104956 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.105124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.105304 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.105535 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:11Z","lastTransitionTime":"2026-02-17T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.110457 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.110614 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:11 crc kubenswrapper[4813]: E0217 08:42:11.110800 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:11 crc kubenswrapper[4813]: E0217 08:42:11.110901 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.208657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.208739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.208767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.208799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.208824 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:11Z","lastTransitionTime":"2026-02-17T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.311751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.311810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.311826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.311852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.311868 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:11Z","lastTransitionTime":"2026-02-17T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.414544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.414628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.414650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.414676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.414695 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:11Z","lastTransitionTime":"2026-02-17T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.517128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.517242 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.517354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.517390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.517489 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:11Z","lastTransitionTime":"2026-02-17T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.620073 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.620130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.620151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.620175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.620192 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:11Z","lastTransitionTime":"2026-02-17T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.722968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.723035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.723052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.723078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.723099 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:11Z","lastTransitionTime":"2026-02-17T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.825879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.825972 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.825995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.826026 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.826046 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:11Z","lastTransitionTime":"2026-02-17T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.929069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.929131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.929149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.929175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:11 crc kubenswrapper[4813]: I0217 08:42:11.929201 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:11Z","lastTransitionTime":"2026-02-17T08:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.032036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.032149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.032176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.032206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.032228 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:12Z","lastTransitionTime":"2026-02-17T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.086027 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:21:15.33980235 +0000 UTC Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.110469 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.110567 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:12 crc kubenswrapper[4813]: E0217 08:42:12.110719 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:12 crc kubenswrapper[4813]: E0217 08:42:12.110955 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.135213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.135358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.135386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.135419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.135441 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:12Z","lastTransitionTime":"2026-02-17T08:42:12Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.238982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.239061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.239079 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.239109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.239169 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:12Z","lastTransitionTime":"2026-02-17T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.342092 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.342150 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.342168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.342191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.342209 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:12Z","lastTransitionTime":"2026-02-17T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.445228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.445341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.445369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.445400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.445423 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:12Z","lastTransitionTime":"2026-02-17T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.559801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.559864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.559886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.559914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.559940 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:12Z","lastTransitionTime":"2026-02-17T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.663299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.663410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.663435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.663460 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.663477 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:12Z","lastTransitionTime":"2026-02-17T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.766514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.766584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.766602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.766627 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.766645 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:12Z","lastTransitionTime":"2026-02-17T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.869718 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.869782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.869802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.869826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.869842 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:12Z","lastTransitionTime":"2026-02-17T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.973008 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.973072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.973092 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.973119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:12 crc kubenswrapper[4813]: I0217 08:42:12.973137 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:12Z","lastTransitionTime":"2026-02-17T08:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.076346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.076422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.076446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.076474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.076497 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:13Z","lastTransitionTime":"2026-02-17T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.086439 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:48:07.795981961 +0000 UTC Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.110569 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:13 crc kubenswrapper[4813]: E0217 08:42:13.110741 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.110823 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:13 crc kubenswrapper[4813]: E0217 08:42:13.110960 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.131827 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:42:01Z\\\",\\\"message\\\":\\\"2026-02-17T08:41:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499\\\\n2026-02-17T08:41:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499 to /host/opt/cni/bin/\\\\n2026-02-17T08:41:16Z [verbose] multus-daemon started\\\\n2026-02-17T08:41:16Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T08:42:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.149418 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b
38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.165574 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc 
kubenswrapper[4813]: I0217 08:42:13.182033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.182075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.182092 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.182119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.182139 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:13Z","lastTransitionTime":"2026-02-17T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.183202 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7656230f-21ef-4fcf-8ead-2329a2a4be2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ddb252c2ed9e53e9c1fffa1b4ff0929f35762930bc4d83d6fe9659653ed5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.227764 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.251736 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.273717 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.285733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.285787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.285804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.285826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.285844 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:13Z","lastTransitionTime":"2026-02-17T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.294928 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.313471 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f
9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.331204 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d656c08-9eaf-4e82-85a6-e55c5db21ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7afd399e353b112d2e6f6cce0e4a33a6a441dbe9aa5ed71b9a3d97dc2b5ccad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20515bd60acb180cb02d22db9ef7b9556ca5b2747ae85d57b78afdc866987007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94eaf80152a93f49d2f1a1e0e90c908ff589a8333803e08fa1c1d2a13122d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.349790 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.368404 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.385209 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.388716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.388767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.388784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.388806 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.388823 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:13Z","lastTransitionTime":"2026-02-17T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.401380 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.417901 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb2
7372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.438214 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6ab
d7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.459961 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.483935 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b010250992
0d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.491926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.491980 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.491997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.492020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.492037 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:13Z","lastTransitionTime":"2026-02-17T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.507652 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:42:07Z\\\",\\\"message\\\":\\\"org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.161\\\\\\\", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 08:42:06.979444 6860 services_controller.go:452] Built service openshift-config-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0217 08:42:06.979452 6860 services_controller.go:453] Built service openshift-config-operator/metrics template LB for network=default: []services.LB{}\\\\nI0217 08:42:06.978247 6860 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0217 08:42:06.979460 6860 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:42:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:13Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.595380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.595449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.595472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.595500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.595522 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:13Z","lastTransitionTime":"2026-02-17T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.699222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.699278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.699294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.699348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.699376 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:13Z","lastTransitionTime":"2026-02-17T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.802138 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.802208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.802224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.802251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.802270 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:13Z","lastTransitionTime":"2026-02-17T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.905494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.905580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.905597 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.905630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:13 crc kubenswrapper[4813]: I0217 08:42:13.905653 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:13Z","lastTransitionTime":"2026-02-17T08:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.008976 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.009045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.009065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.009093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.009114 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:14Z","lastTransitionTime":"2026-02-17T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.087494 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 22:45:27.739092559 +0000 UTC Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.110453 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.110457 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:14 crc kubenswrapper[4813]: E0217 08:42:14.110686 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:14 crc kubenswrapper[4813]: E0217 08:42:14.110918 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.112627 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.112680 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.112698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.112727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.112751 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:14Z","lastTransitionTime":"2026-02-17T08:42:14Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.215668 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.215739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.215753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.215771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.215786 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:14Z","lastTransitionTime":"2026-02-17T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.318261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.318352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.318370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.318394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.318410 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:14Z","lastTransitionTime":"2026-02-17T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.421815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.422161 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.422355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.422511 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.422642 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:14Z","lastTransitionTime":"2026-02-17T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.525711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.525776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.525792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.525820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.525837 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:14Z","lastTransitionTime":"2026-02-17T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.629137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.629582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.629727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.629873 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.630028 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:14Z","lastTransitionTime":"2026-02-17T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.733508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.733587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.733608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.733632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.733652 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:14Z","lastTransitionTime":"2026-02-17T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.837006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.837072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.837105 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.837136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.837158 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:14Z","lastTransitionTime":"2026-02-17T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.940449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.940502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.940520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.940543 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:14 crc kubenswrapper[4813]: I0217 08:42:14.940559 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:14Z","lastTransitionTime":"2026-02-17T08:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.043761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.043851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.043873 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.043905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.043926 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:15Z","lastTransitionTime":"2026-02-17T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.088717 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:32:33.971919137 +0000 UTC Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.111149 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.111337 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:15 crc kubenswrapper[4813]: E0217 08:42:15.111859 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:15 crc kubenswrapper[4813]: E0217 08:42:15.111868 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.145954 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.146019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.146042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.146070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.146091 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:15Z","lastTransitionTime":"2026-02-17T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.248804 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.248877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.248898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.248925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.248942 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:15Z","lastTransitionTime":"2026-02-17T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.352258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.352358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.352388 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.352418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.352443 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:15Z","lastTransitionTime":"2026-02-17T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.455334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.455719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.455884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.456038 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.456186 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:15Z","lastTransitionTime":"2026-02-17T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.559722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.560217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.560387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.560530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.560710 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:15Z","lastTransitionTime":"2026-02-17T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.664113 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.664523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.664669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.664827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.664981 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:15Z","lastTransitionTime":"2026-02-17T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.768375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.768445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.768463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.768487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.768505 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:15Z","lastTransitionTime":"2026-02-17T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.871743 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.871798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.871816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.871840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.871856 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:15Z","lastTransitionTime":"2026-02-17T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.974675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.974756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.974780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.974808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:15 crc kubenswrapper[4813]: I0217 08:42:15.974826 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:15Z","lastTransitionTime":"2026-02-17T08:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.077808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.077866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.077886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.077909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.077927 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:16Z","lastTransitionTime":"2026-02-17T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.089195 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:10:00.178445904 +0000 UTC Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.110613 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.110705 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:16 crc kubenswrapper[4813]: E0217 08:42:16.110784 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:16 crc kubenswrapper[4813]: E0217 08:42:16.110871 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.180658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.180737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.180771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.180798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.180815 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:16Z","lastTransitionTime":"2026-02-17T08:42:16Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.283532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.283601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.283620 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.283645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.283661 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:16Z","lastTransitionTime":"2026-02-17T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.386403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.386456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.386473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.386496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.386518 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:16Z","lastTransitionTime":"2026-02-17T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.489926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.489973 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.489988 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.490014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.490031 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:16Z","lastTransitionTime":"2026-02-17T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.593209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.593288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.593305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.593356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.593382 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:16Z","lastTransitionTime":"2026-02-17T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.697480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.697578 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.697604 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.697684 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.697708 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:16Z","lastTransitionTime":"2026-02-17T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.801195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.801643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.801791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.801939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.802119 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:16Z","lastTransitionTime":"2026-02-17T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.905634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.906092 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.906427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.906652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.906874 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:16Z","lastTransitionTime":"2026-02-17T08:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.936845 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.937098 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:16 crc kubenswrapper[4813]: I0217 08:42:16.937219 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:16 crc kubenswrapper[4813]: E0217 08:42:16.937431 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:42:16 crc kubenswrapper[4813]: E0217 08:42:16.937537 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.937508096 +0000 UTC m=+148.598269359 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 08:42:16 crc kubenswrapper[4813]: E0217 08:42:16.937596 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.937552547 +0000 UTC m=+148.598313810 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:42:16 crc kubenswrapper[4813]: E0217 08:42:16.937649 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:42:16 crc kubenswrapper[4813]: E0217 08:42:16.937804 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.937766083 +0000 UTC m=+148.598527376 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.009609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.009667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.009687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.009714 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.009731 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:17Z","lastTransitionTime":"2026-02-17T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.037850 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.037937 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:17 crc kubenswrapper[4813]: E0217 08:42:17.038127 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:42:17 crc kubenswrapper[4813]: E0217 08:42:17.038161 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:42:17 crc kubenswrapper[4813]: E0217 08:42:17.038175 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 08:42:17 crc kubenswrapper[4813]: E0217 08:42:17.038186 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 
08:42:17 crc kubenswrapper[4813]: E0217 08:42:17.038236 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 08:42:17 crc kubenswrapper[4813]: E0217 08:42:17.038256 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:42:17 crc kubenswrapper[4813]: E0217 08:42:17.038289 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 08:43:21.038263583 +0000 UTC m=+148.699024836 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:42:17 crc kubenswrapper[4813]: E0217 08:42:17.038405 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 08:43:21.038366545 +0000 UTC m=+148.699127798 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.089361 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:24:35.782383416 +0000 UTC Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.110911 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:17 crc kubenswrapper[4813]: E0217 08:42:17.111085 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.111145 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:17 crc kubenswrapper[4813]: E0217 08:42:17.111369 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.112847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.112899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.112917 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.112941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.112958 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:17Z","lastTransitionTime":"2026-02-17T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.215781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.215828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.215844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.215867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.215884 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:17Z","lastTransitionTime":"2026-02-17T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.323726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.323801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.323839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.323872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.323892 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:17Z","lastTransitionTime":"2026-02-17T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.427036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.427080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.427096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.427119 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.427137 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:17Z","lastTransitionTime":"2026-02-17T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.530275 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.530397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.530421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.530450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.530469 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:17Z","lastTransitionTime":"2026-02-17T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.633467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.633532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.633557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.633587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.633607 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:17Z","lastTransitionTime":"2026-02-17T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.736145 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.736194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.736211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.736233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.736251 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:17Z","lastTransitionTime":"2026-02-17T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.839669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.839796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.839821 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.839854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.839876 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:17Z","lastTransitionTime":"2026-02-17T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.943861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.943961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.943981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.944009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:17 crc kubenswrapper[4813]: I0217 08:42:17.944028 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:17Z","lastTransitionTime":"2026-02-17T08:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.048365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.048428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.048451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.048484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.048508 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:18Z","lastTransitionTime":"2026-02-17T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.090392 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:03:32.413337604 +0000 UTC Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.110803 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.110862 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:18 crc kubenswrapper[4813]: E0217 08:42:18.111014 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:18 crc kubenswrapper[4813]: E0217 08:42:18.111217 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.150936 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.150985 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.151002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.151023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.151039 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:18Z","lastTransitionTime":"2026-02-17T08:42:18Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.253882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.253955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.253971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.253993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.254009 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:18Z","lastTransitionTime":"2026-02-17T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.356745 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.356825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.356848 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.356876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.356898 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:18Z","lastTransitionTime":"2026-02-17T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.459705 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.459777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.459800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.459825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.459841 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:18Z","lastTransitionTime":"2026-02-17T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.563176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.563229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.563245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.563267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.563425 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:18Z","lastTransitionTime":"2026-02-17T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.666495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.666552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.666570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.666594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.666611 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:18Z","lastTransitionTime":"2026-02-17T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.769836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.769893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.769909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.769931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.769951 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:18Z","lastTransitionTime":"2026-02-17T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.872879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.872940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.872956 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.872980 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.872999 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:18Z","lastTransitionTime":"2026-02-17T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.976971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.977036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.977057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.977084 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:18 crc kubenswrapper[4813]: I0217 08:42:18.977107 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:18Z","lastTransitionTime":"2026-02-17T08:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.079741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.080161 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.080182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.080207 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.080230 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:19Z","lastTransitionTime":"2026-02-17T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.090964 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:07:11.560574169 +0000 UTC Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.110558 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.110638 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:19 crc kubenswrapper[4813]: E0217 08:42:19.110815 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:19 crc kubenswrapper[4813]: E0217 08:42:19.111070 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.183200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.183251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.183264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.183281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.183293 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:19Z","lastTransitionTime":"2026-02-17T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.286746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.286811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.286828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.286853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.286872 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:19Z","lastTransitionTime":"2026-02-17T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.389656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.389720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.389740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.389763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.389785 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:19Z","lastTransitionTime":"2026-02-17T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.492535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.492584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.492595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.492638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.492651 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:19Z","lastTransitionTime":"2026-02-17T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.595279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.595387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.595405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.595433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.595452 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:19Z","lastTransitionTime":"2026-02-17T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.699498 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.699557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.699572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.699594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.699608 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:19Z","lastTransitionTime":"2026-02-17T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.802634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.802699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.802716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.802739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.802756 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:19Z","lastTransitionTime":"2026-02-17T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.905696 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.905730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.905740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.905754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:19 crc kubenswrapper[4813]: I0217 08:42:19.905765 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:19Z","lastTransitionTime":"2026-02-17T08:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.008411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.008457 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.008470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.008490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.008504 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.091741 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:24:54.485662857 +0000 UTC Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.111261 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.111294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.111359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.111377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.111398 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.111415 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.111415 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: E0217 08:42:20.111545 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:20 crc kubenswrapper[4813]: E0217 08:42:20.111857 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.112831 4813 scope.go:117] "RemoveContainer" containerID="fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532" Feb 17 08:42:20 crc kubenswrapper[4813]: E0217 08:42:20.113065 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.214816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.214876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.214893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.214916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.214935 4813 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.318021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.318085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.318101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.318125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.318142 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.421130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.421193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.421210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.421237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.421263 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.525225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.525286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.525303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.525356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.525374 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.605704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.605764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.605783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.605805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.605822 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: E0217 08:42:20.628094 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.633866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.633938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.633958 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.633987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.634006 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: E0217 08:42:20.655994 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.660980 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.661044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.661062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.661088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.661108 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: E0217 08:42:20.683173 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.688667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.688779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.688801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.688829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.688852 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: E0217 08:42:20.709717 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.714734 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.714798 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.714816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.714843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.714861 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: E0217 08:42:20.734990 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"638419d1-5faa-4f84-9c92-7db1de46de03\\\",\\\"systemUUID\\\":\\\"2e490fc5-8f26-428d-b89b-fef6c7566c17\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:20Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:20 crc kubenswrapper[4813]: E0217 08:42:20.735252 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.737434 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.737498 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.737515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.737540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.737556 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.840259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.840296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.840335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.840355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.840367 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.943805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.943878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.943901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.943925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:20 crc kubenswrapper[4813]: I0217 08:42:20.943943 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:20Z","lastTransitionTime":"2026-02-17T08:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.046832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.046896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.046913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.046938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.046959 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:21Z","lastTransitionTime":"2026-02-17T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.092854 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 14:24:14.356313923 +0000 UTC Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.110532 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.110560 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:21 crc kubenswrapper[4813]: E0217 08:42:21.110787 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:21 crc kubenswrapper[4813]: E0217 08:42:21.110935 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.149855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.149931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.149954 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.149986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.150011 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:21Z","lastTransitionTime":"2026-02-17T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.253374 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.253425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.253442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.253466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.253485 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:21Z","lastTransitionTime":"2026-02-17T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.356336 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.356375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.356385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.356400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.356409 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:21Z","lastTransitionTime":"2026-02-17T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.458052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.458093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.458104 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.458120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.458132 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:21Z","lastTransitionTime":"2026-02-17T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.561192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.561245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.561259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.561277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.561289 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:21Z","lastTransitionTime":"2026-02-17T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.663819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.663882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.663899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.663923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.663941 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:21Z","lastTransitionTime":"2026-02-17T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.766720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.766776 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.766793 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.766818 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.766835 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:21Z","lastTransitionTime":"2026-02-17T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.869505 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.869570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.869591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.869615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.869632 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:21Z","lastTransitionTime":"2026-02-17T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.972680 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.972742 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.972759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.972783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:21 crc kubenswrapper[4813]: I0217 08:42:21.972800 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:21Z","lastTransitionTime":"2026-02-17T08:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.075959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.076024 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.076047 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.076077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.076096 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:22Z","lastTransitionTime":"2026-02-17T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.093546 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:08:20.31237441 +0000 UTC Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.110391 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.110398 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:22 crc kubenswrapper[4813]: E0217 08:42:22.110643 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:22 crc kubenswrapper[4813]: E0217 08:42:22.110775 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.178632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.178693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.178716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.178744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.178766 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:22Z","lastTransitionTime":"2026-02-17T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.280792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.280831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.280843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.280860 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.280871 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:22Z","lastTransitionTime":"2026-02-17T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.384187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.384227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.384238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.384252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.384262 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:22Z","lastTransitionTime":"2026-02-17T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.487724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.487787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.487811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.487842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.487864 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:22Z","lastTransitionTime":"2026-02-17T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.594854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.594939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.594964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.594992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.595025 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:22Z","lastTransitionTime":"2026-02-17T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.698568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.698623 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.698641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.698662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.698679 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:22Z","lastTransitionTime":"2026-02-17T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.801632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.801752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.801769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.801796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.801812 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:22Z","lastTransitionTime":"2026-02-17T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.904391 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.904449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.904470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.904501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:22 crc kubenswrapper[4813]: I0217 08:42:22.904561 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:22Z","lastTransitionTime":"2026-02-17T08:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.007758 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.007822 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.007845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.007874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.007896 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:23Z","lastTransitionTime":"2026-02-17T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.093658 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:54:45.315528551 +0000 UTC Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.110460 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:23 crc kubenswrapper[4813]: E0217 08:42:23.110659 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.111370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.111385 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.111415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.111431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.111453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.111470 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:23Z","lastTransitionTime":"2026-02-17T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:23 crc kubenswrapper[4813]: E0217 08:42:23.111566 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.132189 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.151131 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a6ba827-b08b-4163-b067-d9adb119398d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad866eeef290ccfbe44c29fd55b58e02720e61108e6b98da44f9149b443901a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dc
ba13d0214a55985b939c00e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cw4g6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-w2pz7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.174195 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fdf09f7-638a-4436-ad1d-f8afe2855536\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T08:41:12Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 08:40:56.776456 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 08:40:56.780103 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1233163282/tls.crt::/tmp/serving-cert-1233163282/tls.key\\\\\\\"\\\\nI0217 08:41:12.258756 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 08:41:12.263984 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 08:41:12.264032 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 08:41:12.264069 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 08:41:12.264084 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 08:41:12.273911 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0217 08:41:12.273962 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 08:41:12.273970 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274049 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 08:41:12.274066 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 08:41:12.274075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 08:41:12.274083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 08:41:12.274104 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 08:41:12.276532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://677f969a7c8a2690c2a18fa5021fd6ab
d7eb8bbd6cba2ce5ef4385916dcc4108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.191441 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qlb2w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d02a7de9-7ac4-40c0-908d-dd8036e26724\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0620367f43cc2f56a57622c3f0098073dfe34692767e31d7360f082500480436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zr5mb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qlb2w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.214101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.214197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.214215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.214265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.214284 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:23Z","lastTransitionTime":"2026-02-17T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.215575 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckxzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b909000-c40e-4ffd-b174-425ab3c9fe6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14991875ed3ab41d4a9ad5ea386e493b11f3f351828b26746bdd0b9482a0e0e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98adb0db376e5bf548856278753c3e4e6cde88b06fce63a2d4a3756f0e6ca3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://1029bd38976afae2735e563c93f595676bacd5e357507cb987b2e796a0a00fbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4022a62a3a2c38762b092b37cb9e414e639ca49c2f5094f5ea9c635cdc36f65\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60e234105160ea707dfdca27d732b0a340f9243aba51a16e647db120fa68f18d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c575e24829debe3877219b0102509920d62d950be22ac3ce4c5918d176886df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://254ae1f67b3ebd4202f40ca521bce6809835168560331485193d8ac353e508fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dj6ch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckxzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.251246 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3513e95a-8ab1-42f1-8aa5-37400db92720\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:42:07Z\\\",\\\"message\\\":\\\"org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.161\\\\\\\", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 08:42:06.979444 6860 services_controller.go:452] Built service openshift-config-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0217 08:42:06.979452 6860 services_controller.go:453] Built service openshift-config-operator/metrics template LB for network=default: []services.LB{}\\\\nI0217 08:42:06.978247 6860 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nF0217 08:42:06.979460 6860 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:42:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e1f8a7f3f97ec1b7
f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z8p8t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qsj6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.270164 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78e46e46-69da-4a35-ab3c-241f09064fe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fb917842d6a74d8940bc31e8e63442566ea7b533c50215c41d45b881c545c65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94ec1b8ec80ce0d28e5b7d5c9e30444a5bb2
7372e278d62fa4413afd470e121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9zr2l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvvlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.286558 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7656230f-21ef-4fcf-8ead-2329a2a4be2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ddb252c2ed9e53e9c1fffa1b4ff0929f35762930bc4d83d6fe9659653ed5230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0dc468d3eb11edef4d42a251ec24586ef054de98cf485bcecf203442543a9b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.318254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.318390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.318410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.318596 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.318619 4813 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:23Z","lastTransitionTime":"2026-02-17T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.320424 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a5860d9-2e84-405b-afac-71b97839c2f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6847afe715436a67d3b3af81e71e76dd677dfb136e7e3069bfa1139bf1fea0b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a104b9bfeed02d7026fefaf2ebc0d06e41338a49e7b7dcf8d2af8a8d1b09c78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e1c9e0053f25a5024d8e7c847dbb8d41d4dbb5a190b68ab886940c234f1bd25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9b30c7dcbe1845bca18bd5ed7d6f17b3d9104671865ab1104f95797efd5768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd8aa7582c5570e00048d7d2c5a9326b432b3c8f1d2496ba13b02bcae236621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e
7ab4817aa395889183a421393f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f01cc63ecc9b7adae26cc9dcdfbed97408c19e7ab4817aa395889183a421393f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37a9e17f49b1d9c465740e1f6140aed9673ff87b369919ea562ed907a008330d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78b1a1c6ac045cc0204c6c6c2e4c75cb7b59d8fc391a5cec02bc763216fb0c0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.341222 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7c6b52aade589c4c3ac43df34ac24f188bfe4e21faade4a9df6707dcccbb2c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.361985 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df7c90639a8845362441e44c5659ee922baf33ba21c26003a1ce2c60861f01be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c33a80dbf50961c
72ecc6245b6cb8cfeb200db28ef0f7333184a401e51cbdace\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.382804 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-swpdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T08:42:01Z\\\",\\\"message\\\":\\\"2026-02-17T08:41:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499\\\\n2026-02-17T08:41:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_79ca2e6e-cdb4-4f2c-b675-ba50311bc499 to /host/opt/cni/bin/\\\\n2026-02-17T08:41:16Z [verbose] multus-daemon started\\\\n2026-02-17T08:41:16Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T08:42:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T08:41:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-swpdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.398542 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b8a9309-df0c-4bc5-bd41-3a54a5cd834d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ecb0f7ee8abfcb2b
38608f80c391a93ebf9e57eaaf047982f5c69c110ce2dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rxcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.414693 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-srrq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b42b143b-e85b-44cc-a427-ba1ebd82c55b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mwdld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:41:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-srrq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc 
kubenswrapper[4813]: I0217 08:42:23.421025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.421133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.421158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.421187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.421210 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:23Z","lastTransitionTime":"2026-02-17T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.434022 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dea3166b-1134-4b3d-984f-38a6bf8f5ee0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23340ff0cd5895d3a47f027c00f470fa694b16562f0f8e615b55933328e38a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fab462285f2
5ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edcd993cb67f46f9fc1df340ee7575555f0f9d03ac169fb25bc8fbe22b3851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c95923f3ec1ff34288037f6fffc2b8badcc31ad6539c607e84f6bb8911d2e61\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.452283 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d656c08-9eaf-4e82-85a6-e55c5db21ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T08:40:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7afd399e353b112d2e6f6cce0e4a33a6a441dbe9aa5ed71b9a3d97dc2b5ccad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://20515bd60acb180cb02d22db9ef7b9556ca5b2747ae85d57b78afdc866987007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e94eaf80152a93f49d2f1a1e0e90c908ff589a8333803e08fa1c1d2a13122d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:40:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://53dd90eb31d741636ca2b95891385c2637031a7beb17167282b0e67ddee6e286\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T08:40:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T08:40:54Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T08:40:53Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.474701 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.497022 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.517743 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T08:41:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2c9c404e7e8e213ffd50e662aac234b3768a72d15167643eb929f0c85c1e97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T08:41:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T08:42:23Z is after 2025-08-24T17:21:41Z" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.524265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.524342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.524366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.524397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.524419 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:23Z","lastTransitionTime":"2026-02-17T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.626728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.626802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.626899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.626932 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.626951 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:23Z","lastTransitionTime":"2026-02-17T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.730410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.730477 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.730497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.730521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.730538 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:23Z","lastTransitionTime":"2026-02-17T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.833831 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.833910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.833933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.833963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.833990 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:23Z","lastTransitionTime":"2026-02-17T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.936914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.936985 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.937009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.937036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:23 crc kubenswrapper[4813]: I0217 08:42:23.937054 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:23Z","lastTransitionTime":"2026-02-17T08:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.040524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.040574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.040584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.040615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.040624 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:24Z","lastTransitionTime":"2026-02-17T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.094541 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:42:23.979191204 +0000 UTC Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.110990 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.111082 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:24 crc kubenswrapper[4813]: E0217 08:42:24.111200 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:24 crc kubenswrapper[4813]: E0217 08:42:24.111347 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.144013 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.144079 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.144102 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.144131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.144153 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:24Z","lastTransitionTime":"2026-02-17T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.247347 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.247448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.247469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.247495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.247513 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:24Z","lastTransitionTime":"2026-02-17T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.351030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.351096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.351117 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.351149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.351171 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:24Z","lastTransitionTime":"2026-02-17T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.455224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.455292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.455344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.455369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.455387 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:24Z","lastTransitionTime":"2026-02-17T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.558791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.558920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.558944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.558966 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.558981 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:24Z","lastTransitionTime":"2026-02-17T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.662505 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.662567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.662653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.662739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.662770 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:24Z","lastTransitionTime":"2026-02-17T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.765854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.765926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.765944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.765967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.765984 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:24Z","lastTransitionTime":"2026-02-17T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.868915 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.869002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.869018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.869043 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.869065 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:24Z","lastTransitionTime":"2026-02-17T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.972220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.972278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.972296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.972350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:24 crc kubenswrapper[4813]: I0217 08:42:24.972369 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:24Z","lastTransitionTime":"2026-02-17T08:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.075805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.075860 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.075879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.075903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.075920 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:25Z","lastTransitionTime":"2026-02-17T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.095154 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:31:47.21047084 +0000 UTC Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.110677 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.110831 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:25 crc kubenswrapper[4813]: E0217 08:42:25.111084 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:25 crc kubenswrapper[4813]: E0217 08:42:25.111259 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.178405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.178480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.178501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.178529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.178550 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:25Z","lastTransitionTime":"2026-02-17T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.281644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.281690 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.281708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.281732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.281749 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:25Z","lastTransitionTime":"2026-02-17T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.384792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.384851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.384870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.384896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.384913 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:25Z","lastTransitionTime":"2026-02-17T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.488262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.488348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.488366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.488390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.488408 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:25Z","lastTransitionTime":"2026-02-17T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.591853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.591904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.591920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.591943 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.591960 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:25Z","lastTransitionTime":"2026-02-17T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.694067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.694111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.694131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.694154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.694171 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:25Z","lastTransitionTime":"2026-02-17T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.797166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.797226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.797248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.797373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.797397 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:25Z","lastTransitionTime":"2026-02-17T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.900562 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.900616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.900633 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.900657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:25 crc kubenswrapper[4813]: I0217 08:42:25.900673 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:25Z","lastTransitionTime":"2026-02-17T08:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.003931 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.003994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.004014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.004038 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.004060 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:26Z","lastTransitionTime":"2026-02-17T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.095633 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:36:45.630059645 +0000 UTC Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.107225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.107294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.107342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.107369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.107387 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:26Z","lastTransitionTime":"2026-02-17T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.110464 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.110474 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:26 crc kubenswrapper[4813]: E0217 08:42:26.110666 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:26 crc kubenswrapper[4813]: E0217 08:42:26.110780 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.211977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.212033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.212049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.212073 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.212092 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:26Z","lastTransitionTime":"2026-02-17T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.315473 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.315530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.315551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.315577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.315597 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:26Z","lastTransitionTime":"2026-02-17T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.427849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.427921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.427944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.427974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.427997 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:26Z","lastTransitionTime":"2026-02-17T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.530950 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.531009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.531026 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.531049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.531066 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:26Z","lastTransitionTime":"2026-02-17T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.633707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.633752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.633768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.633789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.633805 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:26Z","lastTransitionTime":"2026-02-17T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.736800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.736845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.736862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.736883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.736900 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:26Z","lastTransitionTime":"2026-02-17T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.839559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.839626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.839642 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.839667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.839683 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:26Z","lastTransitionTime":"2026-02-17T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.942286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.942347 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.942376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.942390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:26 crc kubenswrapper[4813]: I0217 08:42:26.942399 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:26Z","lastTransitionTime":"2026-02-17T08:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.044614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.044686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.044703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.044727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.044744 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:27Z","lastTransitionTime":"2026-02-17T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.096063 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:08:06.663735798 +0000 UTC Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.110609 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.110819 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:27 crc kubenswrapper[4813]: E0217 08:42:27.111025 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:27 crc kubenswrapper[4813]: E0217 08:42:27.111380 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.147109 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.147157 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.147168 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.147187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.147199 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:27Z","lastTransitionTime":"2026-02-17T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.249766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.249842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.249866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.249896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.249918 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:27Z","lastTransitionTime":"2026-02-17T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.352198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.352251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.352269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.352294 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.352335 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:27Z","lastTransitionTime":"2026-02-17T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.455253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.455379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.455407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.455438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.455464 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:27Z","lastTransitionTime":"2026-02-17T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.559350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.559411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.559430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.559456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.559474 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:27Z","lastTransitionTime":"2026-02-17T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.662530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.662650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.662667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.662691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.662707 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:27Z","lastTransitionTime":"2026-02-17T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.765344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.765402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.765420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.765449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.765471 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:27Z","lastTransitionTime":"2026-02-17T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.868702 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.868768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.868786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.868816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.868839 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:27Z","lastTransitionTime":"2026-02-17T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.972800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.972886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.972910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.972942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:27 crc kubenswrapper[4813]: I0217 08:42:27.972967 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:27Z","lastTransitionTime":"2026-02-17T08:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.076164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.076223 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.076240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.076265 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.076302 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:28Z","lastTransitionTime":"2026-02-17T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.096631 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:26:10.443082766 +0000 UTC Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.111009 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.111048 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:28 crc kubenswrapper[4813]: E0217 08:42:28.111182 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:28 crc kubenswrapper[4813]: E0217 08:42:28.111398 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.179466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.179564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.179582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.179609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.179626 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:28Z","lastTransitionTime":"2026-02-17T08:42:28Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.283441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.283506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.283525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.283550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.283566 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:28Z","lastTransitionTime":"2026-02-17T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.386209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.386281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.386301 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.386351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.386369 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:28Z","lastTransitionTime":"2026-02-17T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.491699 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.491759 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.491778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.491807 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.491829 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:28Z","lastTransitionTime":"2026-02-17T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.595120 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.595194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.595211 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.595236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.595252 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:28Z","lastTransitionTime":"2026-02-17T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.698468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.698521 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.698542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.698566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.698583 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:28Z","lastTransitionTime":"2026-02-17T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.801484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.801545 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.801564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.801591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.801609 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:28Z","lastTransitionTime":"2026-02-17T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.905336 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.905397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.905417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.905439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:28 crc kubenswrapper[4813]: I0217 08:42:28.905454 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:28Z","lastTransitionTime":"2026-02-17T08:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.007927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.007983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.007999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.008024 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.008041 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:29Z","lastTransitionTime":"2026-02-17T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.097140 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 15:40:25.820696437 +0000 UTC Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.109461 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.109492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.109500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.109514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.109523 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:29Z","lastTransitionTime":"2026-02-17T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.110835 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:29 crc kubenswrapper[4813]: E0217 08:42:29.110918 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.110835 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:29 crc kubenswrapper[4813]: E0217 08:42:29.110976 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.211974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.212009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.212023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.212040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.212052 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:29Z","lastTransitionTime":"2026-02-17T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.315295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.315395 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.315413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.315438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.315454 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:29Z","lastTransitionTime":"2026-02-17T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.418396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.418483 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.418509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.418541 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.418565 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:29Z","lastTransitionTime":"2026-02-17T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.521633 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.521664 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.521675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.521691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.521701 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:29Z","lastTransitionTime":"2026-02-17T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.625131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.625189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.625206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.625235 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.625252 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:29Z","lastTransitionTime":"2026-02-17T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.727958 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.728050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.728068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.728094 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.728113 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:29Z","lastTransitionTime":"2026-02-17T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.830593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.830663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.830682 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.830710 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.830728 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:29Z","lastTransitionTime":"2026-02-17T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.933300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.933390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.933407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.933435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:29 crc kubenswrapper[4813]: I0217 08:42:29.933453 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:29Z","lastTransitionTime":"2026-02-17T08:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.036106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.036169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.036189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.036215 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.036234 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:30Z","lastTransitionTime":"2026-02-17T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.098032 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:54:48.211931679 +0000 UTC Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.110440 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.110460 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:30 crc kubenswrapper[4813]: E0217 08:42:30.110719 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:30 crc kubenswrapper[4813]: E0217 08:42:30.110785 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.138969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.139032 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.139050 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.139071 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.139088 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:30Z","lastTransitionTime":"2026-02-17T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.242907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.242970 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.242990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.243016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.243033 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:30Z","lastTransitionTime":"2026-02-17T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.345786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.345842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.345859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.345883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.345900 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:30Z","lastTransitionTime":"2026-02-17T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.449005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.449081 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.449106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.449135 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.449156 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:30Z","lastTransitionTime":"2026-02-17T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.552181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.552240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.552258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.552279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.552297 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:30Z","lastTransitionTime":"2026-02-17T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.655723 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.655778 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.655796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.655822 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.655839 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:30Z","lastTransitionTime":"2026-02-17T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.758656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.758693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.758704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.758719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.758731 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:30Z","lastTransitionTime":"2026-02-17T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.766052 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.766112 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.766129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.766153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.766170 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T08:42:30Z","lastTransitionTime":"2026-02-17T08:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.835691 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw"] Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.836787 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.841102 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.841709 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.841801 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.842673 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.874821 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-swpdn" podStartSLOduration=77.874791719 podStartE2EDuration="1m17.874791719s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:42:30.874692276 +0000 UTC m=+98.535453569" watchObservedRunningTime="2026-02-17 08:42:30.874791719 +0000 UTC m=+98.535552972" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.882864 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/48505ca0-2b00-4488-b92b-697c26cf3a8f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.882933 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/48505ca0-2b00-4488-b92b-697c26cf3a8f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.882971 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/48505ca0-2b00-4488-b92b-697c26cf3a8f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.883034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48505ca0-2b00-4488-b92b-697c26cf3a8f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.883097 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48505ca0-2b00-4488-b92b-697c26cf3a8f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.891517 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qdj4m" podStartSLOduration=77.891498168 podStartE2EDuration="1m17.891498168s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:42:30.891302582 +0000 UTC m=+98.552063845" watchObservedRunningTime="2026-02-17 08:42:30.891498168 +0000 UTC m=+98.552259431" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.962605 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=75.96258107 podStartE2EDuration="1m15.96258107s" podCreationTimestamp="2026-02-17 08:41:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:42:30.961760397 +0000 UTC m=+98.622521650" watchObservedRunningTime="2026-02-17 08:42:30.96258107 +0000 UTC m=+98.623342333" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.962939 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.962929129 podStartE2EDuration="30.962929129s" podCreationTimestamp="2026-02-17 08:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:42:30.92144738 +0000 UTC m=+98.582208643" watchObservedRunningTime="2026-02-17 08:42:30.962929129 +0000 UTC m=+98.623690382" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.983972 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/48505ca0-2b00-4488-b92b-697c26cf3a8f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.984043 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/48505ca0-2b00-4488-b92b-697c26cf3a8f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.984079 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/48505ca0-2b00-4488-b92b-697c26cf3a8f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.984135 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48505ca0-2b00-4488-b92b-697c26cf3a8f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.984199 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48505ca0-2b00-4488-b92b-697c26cf3a8f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.984955 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/48505ca0-2b00-4488-b92b-697c26cf3a8f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.985389 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/48505ca0-2b00-4488-b92b-697c26cf3a8f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.986505 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/48505ca0-2b00-4488-b92b-697c26cf3a8f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:30 crc kubenswrapper[4813]: I0217 08:42:30.992565 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48505ca0-2b00-4488-b92b-697c26cf3a8f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.020152 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48505ca0-2b00-4488-b92b-697c26cf3a8f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mm9sw\" (UID: \"48505ca0-2b00-4488-b92b-697c26cf3a8f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.079621 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=73.079600983 
podStartE2EDuration="1m13.079600983s" podCreationTimestamp="2026-02-17 08:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:42:31.079427948 +0000 UTC m=+98.740189231" watchObservedRunningTime="2026-02-17 08:42:31.079600983 +0000 UTC m=+98.740362206" Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.095651 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.095624203 podStartE2EDuration="46.095624203s" podCreationTimestamp="2026-02-17 08:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:42:31.094988445 +0000 UTC m=+98.755749698" watchObservedRunningTime="2026-02-17 08:42:31.095624203 +0000 UTC m=+98.756385466" Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.098282 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:00:33.901245861 +0000 UTC Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.098391 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.106050 4813 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.110845 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.110922 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:31 crc kubenswrapper[4813]: E0217 08:42:31.111008 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:31 crc kubenswrapper[4813]: E0217 08:42:31.111085 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.155897 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podStartSLOduration=78.155865407 podStartE2EDuration="1m18.155865407s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:42:31.155701682 +0000 UTC m=+98.816462925" watchObservedRunningTime="2026-02-17 08:42:31.155865407 +0000 UTC m=+98.816626660" Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.158989 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.194286 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvvlc" podStartSLOduration=77.194259831 podStartE2EDuration="1m17.194259831s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:42:31.170747286 +0000 UTC m=+98.831508509" watchObservedRunningTime="2026-02-17 08:42:31.194259831 +0000 UTC m=+98.855021094" Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.209686 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qlb2w" podStartSLOduration=78.209661074 podStartE2EDuration="1m18.209661074s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:42:31.209555581 +0000 UTC m=+98.870316814" watchObservedRunningTime="2026-02-17 08:42:31.209661074 +0000 UTC m=+98.870422307" Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.210237 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.21022807 podStartE2EDuration="1m18.21022807s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:42:31.199658579 +0000 UTC m=+98.860419822" watchObservedRunningTime="2026-02-17 08:42:31.21022807 +0000 UTC m=+98.870989303" Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.228338 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-additional-cni-plugins-ckxzc" podStartSLOduration=78.228296076 podStartE2EDuration="1m18.228296076s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:42:31.227639798 +0000 UTC m=+98.888401031" watchObservedRunningTime="2026-02-17 08:42:31.228296076 +0000 UTC m=+98.889057309" Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.701544 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" event={"ID":"48505ca0-2b00-4488-b92b-697c26cf3a8f","Type":"ContainerStarted","Data":"ab1e1fc5f06018af6259175119e03fa1f199989e9438249f77df5bc1ce76d546"} Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.702010 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" event={"ID":"48505ca0-2b00-4488-b92b-697c26cf3a8f","Type":"ContainerStarted","Data":"541b5751acc4954cf59a09b9bc592043ac8f191550375ea7e149bef1de54ef87"} Feb 17 08:42:31 crc kubenswrapper[4813]: I0217 08:42:31.720367 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm9sw" podStartSLOduration=78.720340537 podStartE2EDuration="1m18.720340537s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:42:31.718774294 +0000 UTC m=+99.379535557" watchObservedRunningTime="2026-02-17 08:42:31.720340537 +0000 UTC m=+99.381101790" Feb 17 08:42:32 crc kubenswrapper[4813]: I0217 08:42:32.110913 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:32 crc kubenswrapper[4813]: I0217 08:42:32.110932 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:32 crc kubenswrapper[4813]: E0217 08:42:32.111097 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:32 crc kubenswrapper[4813]: E0217 08:42:32.111257 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:32 crc kubenswrapper[4813]: I0217 08:42:32.294406 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:32 crc kubenswrapper[4813]: E0217 08:42:32.294592 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 08:42:32 crc kubenswrapper[4813]: E0217 08:42:32.294675 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs podName:b42b143b-e85b-44cc-a427-ba1ebd82c55b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:36.294652397 +0000 UTC m=+163.955413630 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs") pod "network-metrics-daemon-srrq7" (UID: "b42b143b-e85b-44cc-a427-ba1ebd82c55b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 08:42:33 crc kubenswrapper[4813]: I0217 08:42:33.110933 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:33 crc kubenswrapper[4813]: I0217 08:42:33.111035 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:33 crc kubenswrapper[4813]: E0217 08:42:33.112869 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:33 crc kubenswrapper[4813]: E0217 08:42:33.113447 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:33 crc kubenswrapper[4813]: I0217 08:42:33.113853 4813 scope.go:117] "RemoveContainer" containerID="fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532" Feb 17 08:42:33 crc kubenswrapper[4813]: E0217 08:42:33.114034 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" Feb 17 08:42:34 crc kubenswrapper[4813]: I0217 08:42:34.110553 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:34 crc kubenswrapper[4813]: E0217 08:42:34.110684 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:34 crc kubenswrapper[4813]: I0217 08:42:34.110557 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:34 crc kubenswrapper[4813]: E0217 08:42:34.110933 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:35 crc kubenswrapper[4813]: I0217 08:42:35.111031 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:35 crc kubenswrapper[4813]: I0217 08:42:35.111204 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:35 crc kubenswrapper[4813]: E0217 08:42:35.111534 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:35 crc kubenswrapper[4813]: E0217 08:42:35.111736 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:36 crc kubenswrapper[4813]: I0217 08:42:36.110452 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:36 crc kubenswrapper[4813]: E0217 08:42:36.110595 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:36 crc kubenswrapper[4813]: I0217 08:42:36.110767 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:36 crc kubenswrapper[4813]: E0217 08:42:36.110999 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:37 crc kubenswrapper[4813]: I0217 08:42:37.110624 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:37 crc kubenswrapper[4813]: E0217 08:42:37.110824 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:37 crc kubenswrapper[4813]: I0217 08:42:37.110893 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:37 crc kubenswrapper[4813]: E0217 08:42:37.111296 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:38 crc kubenswrapper[4813]: I0217 08:42:38.111102 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:38 crc kubenswrapper[4813]: I0217 08:42:38.111158 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:38 crc kubenswrapper[4813]: E0217 08:42:38.111266 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:38 crc kubenswrapper[4813]: E0217 08:42:38.111459 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:39 crc kubenswrapper[4813]: I0217 08:42:39.110664 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:39 crc kubenswrapper[4813]: I0217 08:42:39.110726 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:39 crc kubenswrapper[4813]: E0217 08:42:39.111022 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:39 crc kubenswrapper[4813]: E0217 08:42:39.111165 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:40 crc kubenswrapper[4813]: I0217 08:42:40.110906 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:40 crc kubenswrapper[4813]: I0217 08:42:40.110923 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:40 crc kubenswrapper[4813]: E0217 08:42:40.111078 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:40 crc kubenswrapper[4813]: E0217 08:42:40.111214 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:41 crc kubenswrapper[4813]: I0217 08:42:41.110588 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:41 crc kubenswrapper[4813]: I0217 08:42:41.110600 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:41 crc kubenswrapper[4813]: E0217 08:42:41.110786 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:41 crc kubenswrapper[4813]: E0217 08:42:41.110879 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:42 crc kubenswrapper[4813]: I0217 08:42:42.110649 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:42 crc kubenswrapper[4813]: I0217 08:42:42.110720 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:42 crc kubenswrapper[4813]: E0217 08:42:42.111861 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:42 crc kubenswrapper[4813]: E0217 08:42:42.112061 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:43 crc kubenswrapper[4813]: I0217 08:42:43.110474 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:43 crc kubenswrapper[4813]: I0217 08:42:43.110625 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:43 crc kubenswrapper[4813]: E0217 08:42:43.113581 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:43 crc kubenswrapper[4813]: E0217 08:42:43.113968 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:44 crc kubenswrapper[4813]: I0217 08:42:44.110229 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:44 crc kubenswrapper[4813]: I0217 08:42:44.110341 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:44 crc kubenswrapper[4813]: E0217 08:42:44.110682 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:44 crc kubenswrapper[4813]: E0217 08:42:44.110811 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:45 crc kubenswrapper[4813]: I0217 08:42:45.110547 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:45 crc kubenswrapper[4813]: I0217 08:42:45.110703 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:45 crc kubenswrapper[4813]: E0217 08:42:45.110894 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:45 crc kubenswrapper[4813]: E0217 08:42:45.111468 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:45 crc kubenswrapper[4813]: I0217 08:42:45.112036 4813 scope.go:117] "RemoveContainer" containerID="fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532" Feb 17 08:42:45 crc kubenswrapper[4813]: E0217 08:42:45.112355 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qsj6b_openshift-ovn-kubernetes(3513e95a-8ab1-42f1-8aa5-37400db92720)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" Feb 17 08:42:46 crc kubenswrapper[4813]: I0217 08:42:46.110525 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:46 crc kubenswrapper[4813]: I0217 08:42:46.110615 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:46 crc kubenswrapper[4813]: E0217 08:42:46.110750 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:46 crc kubenswrapper[4813]: E0217 08:42:46.110903 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:47 crc kubenswrapper[4813]: I0217 08:42:47.110712 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:47 crc kubenswrapper[4813]: I0217 08:42:47.110758 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:47 crc kubenswrapper[4813]: E0217 08:42:47.110913 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:47 crc kubenswrapper[4813]: E0217 08:42:47.111035 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:47 crc kubenswrapper[4813]: I0217 08:42:47.774655 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swpdn_9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0/kube-multus/1.log" Feb 17 08:42:47 crc kubenswrapper[4813]: I0217 08:42:47.775384 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swpdn_9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0/kube-multus/0.log" Feb 17 08:42:47 crc kubenswrapper[4813]: I0217 08:42:47.775452 4813 generic.go:334] "Generic (PLEG): container finished" podID="9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0" containerID="05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40" exitCode=1 Feb 17 08:42:47 crc kubenswrapper[4813]: I0217 08:42:47.775494 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swpdn" event={"ID":"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0","Type":"ContainerDied","Data":"05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40"} Feb 17 08:42:47 crc kubenswrapper[4813]: I0217 08:42:47.775540 4813 scope.go:117] "RemoveContainer" containerID="fa88e49a7d4a85e6d48aecbb951aac7034d9178a567b32640e322d6e0ce492bc" Feb 17 08:42:47 crc kubenswrapper[4813]: I0217 08:42:47.776124 4813 scope.go:117] "RemoveContainer" containerID="05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40" Feb 17 08:42:47 crc kubenswrapper[4813]: E0217 08:42:47.776500 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-swpdn_openshift-multus(9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0)\"" pod="openshift-multus/multus-swpdn" podUID="9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0" Feb 17 08:42:48 crc kubenswrapper[4813]: I0217 08:42:48.110255 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:48 crc kubenswrapper[4813]: I0217 08:42:48.110255 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:48 crc kubenswrapper[4813]: E0217 08:42:48.110460 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:48 crc kubenswrapper[4813]: E0217 08:42:48.110553 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:48 crc kubenswrapper[4813]: I0217 08:42:48.780719 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swpdn_9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0/kube-multus/1.log" Feb 17 08:42:49 crc kubenswrapper[4813]: I0217 08:42:49.110967 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:49 crc kubenswrapper[4813]: I0217 08:42:49.111023 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:49 crc kubenswrapper[4813]: E0217 08:42:49.111201 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:49 crc kubenswrapper[4813]: E0217 08:42:49.111482 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:50 crc kubenswrapper[4813]: I0217 08:42:50.110471 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:50 crc kubenswrapper[4813]: I0217 08:42:50.110558 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:50 crc kubenswrapper[4813]: E0217 08:42:50.110641 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:50 crc kubenswrapper[4813]: E0217 08:42:50.110779 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:51 crc kubenswrapper[4813]: I0217 08:42:51.111273 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:51 crc kubenswrapper[4813]: I0217 08:42:51.111425 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:51 crc kubenswrapper[4813]: E0217 08:42:51.111600 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:51 crc kubenswrapper[4813]: E0217 08:42:51.111890 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:52 crc kubenswrapper[4813]: I0217 08:42:52.110088 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:52 crc kubenswrapper[4813]: I0217 08:42:52.110209 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:52 crc kubenswrapper[4813]: E0217 08:42:52.110337 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:52 crc kubenswrapper[4813]: E0217 08:42:52.110534 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:53 crc kubenswrapper[4813]: E0217 08:42:53.043648 4813 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 17 08:42:53 crc kubenswrapper[4813]: I0217 08:42:53.110370 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:53 crc kubenswrapper[4813]: I0217 08:42:53.110482 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:53 crc kubenswrapper[4813]: E0217 08:42:53.112335 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:53 crc kubenswrapper[4813]: E0217 08:42:53.112452 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:53 crc kubenswrapper[4813]: E0217 08:42:53.230723 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 08:42:54 crc kubenswrapper[4813]: I0217 08:42:54.110758 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:54 crc kubenswrapper[4813]: I0217 08:42:54.110876 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:54 crc kubenswrapper[4813]: E0217 08:42:54.110962 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:54 crc kubenswrapper[4813]: E0217 08:42:54.111095 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:55 crc kubenswrapper[4813]: I0217 08:42:55.111141 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:55 crc kubenswrapper[4813]: I0217 08:42:55.111159 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:55 crc kubenswrapper[4813]: E0217 08:42:55.111415 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:55 crc kubenswrapper[4813]: E0217 08:42:55.111628 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:56 crc kubenswrapper[4813]: I0217 08:42:56.110840 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:56 crc kubenswrapper[4813]: I0217 08:42:56.110936 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:56 crc kubenswrapper[4813]: E0217 08:42:56.111091 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:56 crc kubenswrapper[4813]: E0217 08:42:56.111210 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:57 crc kubenswrapper[4813]: I0217 08:42:57.110410 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:57 crc kubenswrapper[4813]: I0217 08:42:57.110444 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:57 crc kubenswrapper[4813]: E0217 08:42:57.110714 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:57 crc kubenswrapper[4813]: E0217 08:42:57.110759 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:42:58 crc kubenswrapper[4813]: I0217 08:42:58.110626 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:42:58 crc kubenswrapper[4813]: I0217 08:42:58.110630 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:42:58 crc kubenswrapper[4813]: E0217 08:42:58.110809 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:42:58 crc kubenswrapper[4813]: E0217 08:42:58.111006 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:42:58 crc kubenswrapper[4813]: E0217 08:42:58.232439 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 08:42:59 crc kubenswrapper[4813]: I0217 08:42:59.110816 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:42:59 crc kubenswrapper[4813]: E0217 08:42:59.110996 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:42:59 crc kubenswrapper[4813]: I0217 08:42:59.110816 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:42:59 crc kubenswrapper[4813]: E0217 08:42:59.111356 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:43:00 crc kubenswrapper[4813]: I0217 08:43:00.110761 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:43:00 crc kubenswrapper[4813]: I0217 08:43:00.110876 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:43:00 crc kubenswrapper[4813]: E0217 08:43:00.110948 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:43:00 crc kubenswrapper[4813]: E0217 08:43:00.111422 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:43:00 crc kubenswrapper[4813]: I0217 08:43:00.112078 4813 scope.go:117] "RemoveContainer" containerID="05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40" Feb 17 08:43:00 crc kubenswrapper[4813]: I0217 08:43:00.112845 4813 scope.go:117] "RemoveContainer" containerID="fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532" Feb 17 08:43:00 crc kubenswrapper[4813]: I0217 08:43:00.865291 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/3.log" Feb 17 08:43:00 crc kubenswrapper[4813]: I0217 08:43:00.868767 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerStarted","Data":"c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7"} Feb 17 08:43:00 crc kubenswrapper[4813]: I0217 08:43:00.869432 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:43:00 crc kubenswrapper[4813]: I0217 08:43:00.871652 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swpdn_9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0/kube-multus/1.log" Feb 17 08:43:00 crc kubenswrapper[4813]: I0217 08:43:00.871735 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-multus/multus-swpdn" event={"ID":"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0","Type":"ContainerStarted","Data":"ca05934e6e6052c9b3a5fdb83e9bbbf8a47816ebac6cb95bb2f96e065b933856"} Feb 17 08:43:00 crc kubenswrapper[4813]: I0217 08:43:00.914918 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podStartSLOduration=106.914894614 podStartE2EDuration="1m46.914894614s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:00.913241959 +0000 UTC m=+128.574003222" watchObservedRunningTime="2026-02-17 08:43:00.914894614 +0000 UTC m=+128.575655877" Feb 17 08:43:01 crc kubenswrapper[4813]: I0217 08:43:01.110513 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:43:01 crc kubenswrapper[4813]: E0217 08:43:01.110689 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:43:01 crc kubenswrapper[4813]: I0217 08:43:01.110816 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:43:01 crc kubenswrapper[4813]: E0217 08:43:01.110991 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:43:01 crc kubenswrapper[4813]: I0217 08:43:01.139493 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-srrq7"] Feb 17 08:43:01 crc kubenswrapper[4813]: I0217 08:43:01.139677 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:43:01 crc kubenswrapper[4813]: E0217 08:43:01.139817 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:43:02 crc kubenswrapper[4813]: I0217 08:43:02.110296 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:43:02 crc kubenswrapper[4813]: E0217 08:43:02.110448 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:43:03 crc kubenswrapper[4813]: I0217 08:43:03.110389 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:43:03 crc kubenswrapper[4813]: I0217 08:43:03.110416 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:43:03 crc kubenswrapper[4813]: I0217 08:43:03.110527 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:43:03 crc kubenswrapper[4813]: E0217 08:43:03.112330 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:43:03 crc kubenswrapper[4813]: E0217 08:43:03.112445 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:43:03 crc kubenswrapper[4813]: E0217 08:43:03.112571 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:43:03 crc kubenswrapper[4813]: E0217 08:43:03.233533 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 17 08:43:04 crc kubenswrapper[4813]: I0217 08:43:04.110623 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:43:04 crc kubenswrapper[4813]: E0217 08:43:04.110827 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:43:05 crc kubenswrapper[4813]: I0217 08:43:05.110716 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:43:05 crc kubenswrapper[4813]: I0217 08:43:05.110726 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:43:05 crc kubenswrapper[4813]: E0217 08:43:05.110915 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:43:05 crc kubenswrapper[4813]: I0217 08:43:05.110947 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:43:05 crc kubenswrapper[4813]: E0217 08:43:05.111110 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:43:05 crc kubenswrapper[4813]: E0217 08:43:05.111344 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:43:06 crc kubenswrapper[4813]: I0217 08:43:06.110560 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:43:06 crc kubenswrapper[4813]: E0217 08:43:06.110732 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:43:07 crc kubenswrapper[4813]: I0217 08:43:07.110832 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:43:07 crc kubenswrapper[4813]: I0217 08:43:07.110897 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:43:07 crc kubenswrapper[4813]: E0217 08:43:07.111026 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 08:43:07 crc kubenswrapper[4813]: I0217 08:43:07.111073 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:43:07 crc kubenswrapper[4813]: E0217 08:43:07.111268 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-srrq7" podUID="b42b143b-e85b-44cc-a427-ba1ebd82c55b" Feb 17 08:43:07 crc kubenswrapper[4813]: E0217 08:43:07.111442 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 08:43:08 crc kubenswrapper[4813]: I0217 08:43:08.110130 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:43:08 crc kubenswrapper[4813]: E0217 08:43:08.110395 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 08:43:09 crc kubenswrapper[4813]: I0217 08:43:09.110151 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:43:09 crc kubenswrapper[4813]: I0217 08:43:09.110644 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:43:09 crc kubenswrapper[4813]: I0217 08:43:09.111393 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 08:43:09 crc kubenswrapper[4813]: I0217 08:43:09.113855 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 17 08:43:09 crc kubenswrapper[4813]: I0217 08:43:09.113947 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 17 08:43:09 crc kubenswrapper[4813]: I0217 08:43:09.114502 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 17 08:43:09 crc kubenswrapper[4813]: I0217 08:43:09.116022 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 17 08:43:09 crc kubenswrapper[4813]: I0217 08:43:09.116205 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 17 08:43:09 crc kubenswrapper[4813]: I0217 08:43:09.116925 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 17 08:43:10 crc kubenswrapper[4813]: I0217 08:43:10.111086 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.439372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.490561 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.491736 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.496281 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.501186 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.502056 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.502440 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.502734 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.502824 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.503626 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.504108 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fkwkx"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.504968 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.506515 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.506706 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.507298 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.508165 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.508781 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gjzhl"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.508885 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.509611 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.511576 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qxxm8"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.518010 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.525453 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.525549 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.525730 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.525776 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.525912 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.525926 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.525956 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.526303 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.526678 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.526779 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.527050 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.527096 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.527512 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.528232 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.528885 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.535570 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.536130 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.538613 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.547480 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.548500 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.548755 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.548926 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.561665 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.562187 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.562415 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.562398 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.562580 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.562934 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.563153 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.563412 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.562216 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.563944 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.563962 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.564087 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.564211 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.564296 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.564476 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.564714 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.564911 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.565648 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-29bxl"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.566167 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.566191 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-7rbpj"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.566763 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7rbpj"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.567932 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-75569"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.568781 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.572446 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.573065 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.573596 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-cf6d7"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.574303 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x222k"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.575043 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x222k"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.573941 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.575952 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-cf6d7"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.573978 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.578216 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.579567 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.580543 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mg6kk"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.579916 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.579997 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.580231 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.580259 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.582929 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mg6kk"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.580423 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.580452 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.584244 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.580485 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.581550 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.587709 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.581816 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.581858 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.581897 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.581939 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.582005 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.582042 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.582049 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.582833 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.582877 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.589056 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.591081 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.591834 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-l2l9m"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.591996 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.594634 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l2l9m"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.595091 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.595408 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.595567 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.595853 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.596140 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.596288 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.597278 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.597575 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.597835 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.598030 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.598204 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.598702 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.598912 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.599068 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.599191 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.599285 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.599400 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.599521 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.599619 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.600430 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.600812 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.600967 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.601253 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.603958 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.604405 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.608424 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.609052 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.609078 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.622350 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-62lng"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.654271 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.654533 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.654730 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.655498 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.655694 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.655917 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.656744 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.657767 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.658199 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q"]
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.658484 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.658672 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.658813 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.658872 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62lng"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659566 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/17c04723-efa9-4b51-9213-3a22b548b114-machine-approver-tls\") pod \"machine-approver-56656f9798-7rx4l\" (UID: \"17c04723-efa9-4b51-9213-3a22b548b114\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659615 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1bd1ba13-d69b-400b-8337-ad2938a9452d-audit-dir\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659644 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zcdq\" (UniqueName: \"kubernetes.io/projected/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-kube-api-access-2zcdq\") pod \"route-controller-manager-6576b87f9c-42mx8\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659668 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-config\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659689 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/712b5668-146d-4a1f-a86d-fb65b56697b7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hzjbh\" (UID: \"712b5668-146d-4a1f-a86d-fb65b56697b7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659711 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4b8a5c-06c8-4206-852c-3e58e2e35bca-config\") pod \"machine-api-operator-5694c8668f-gjzhl\" (UID: \"bf4b8a5c-06c8-4206-852c-3e58e2e35bca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659732 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf4b8a5c-06c8-4206-852c-3e58e2e35bca-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gjzhl\" (UID: \"bf4b8a5c-06c8-4206-852c-3e58e2e35bca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659752 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2cr\" (UniqueName: \"kubernetes.io/projected/bf4b8a5c-06c8-4206-852c-3e58e2e35bca-kube-api-access-4q2cr\") pod \"machine-api-operator-5694c8668f-gjzhl\" (UID: \"bf4b8a5c-06c8-4206-852c-3e58e2e35bca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659774 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bd1ba13-d69b-400b-8337-ad2938a9452d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-config\") pod \"route-controller-manager-6576b87f9c-42mx8\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659826 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bd1ba13-d69b-400b-8337-ad2938a9452d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659849 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bd1ba13-d69b-400b-8337-ad2938a9452d-serving-cert\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659869 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a5230ee-cb95-4c2b-984a-3ed9286f4c45-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9kvp9\" (UID: \"9a5230ee-cb95-4c2b-984a-3ed9286f4c45\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659894 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/967e0fab-3a56-4541-b3aa-e626e2c524c1-serving-cert\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.659977 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c7dn\" (UniqueName: \"kubernetes.io/projected/17c04723-efa9-4b51-9213-3a22b548b114-kube-api-access-8c7dn\") pod \"machine-approver-56656f9798-7rx4l\" (UID: \"17c04723-efa9-4b51-9213-3a22b548b114\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660051 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dszt\" (UniqueName: \"kubernetes.io/projected/967e0fab-3a56-4541-b3aa-e626e2c524c1-kube-api-access-2dszt\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660077 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-client-ca\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660103 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bd1ba13-d69b-400b-8337-ad2938a9452d-encryption-config\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660126 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-serving-cert\") pod \"route-controller-manager-6576b87f9c-42mx8\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660165 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f185fcc-0363-430b-a331-0e8ea791f9f6-serving-cert\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660181 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2bqf\" (UniqueName: \"kubernetes.io/projected/9f185fcc-0363-430b-a331-0e8ea791f9f6-kube-api-access-f2bqf\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660197 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/967e0fab-3a56-4541-b3aa-e626e2c524c1-config\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660216 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660242 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bd1ba13-d69b-400b-8337-ad2938a9452d-etcd-client\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660261 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bf4b8a5c-06c8-4206-852c-3e58e2e35bca-images\") pod \"machine-api-operator-5694c8668f-gjzhl\" (UID: \"bf4b8a5c-06c8-4206-852c-3e58e2e35bca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660282 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967e0fab-3a56-4541-b3aa-e626e2c524c1-service-ca-bundle\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8"
Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660319 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/17c04723-efa9-4b51-9213-3a22b548b114-auth-proxy-config\") pod \"machine-approver-56656f9798-7rx4l\" (UID: 
\"17c04723-efa9-4b51-9213-3a22b548b114\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660351 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnggm\" (UniqueName: \"kubernetes.io/projected/712b5668-146d-4a1f-a86d-fb65b56697b7-kube-api-access-xnggm\") pod \"openshift-apiserver-operator-796bbdcf4f-hzjbh\" (UID: \"712b5668-146d-4a1f-a86d-fb65b56697b7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660374 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/712b5668-146d-4a1f-a86d-fb65b56697b7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hzjbh\" (UID: \"712b5668-146d-4a1f-a86d-fb65b56697b7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660405 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1bd1ba13-d69b-400b-8337-ad2938a9452d-audit-policies\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660452 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf5x9\" (UniqueName: \"kubernetes.io/projected/9a5230ee-cb95-4c2b-984a-3ed9286f4c45-kube-api-access-qf5x9\") pod \"cluster-samples-operator-665b6dd947-9kvp9\" (UID: \"9a5230ee-cb95-4c2b-984a-3ed9286f4c45\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660481 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967e0fab-3a56-4541-b3aa-e626e2c524c1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660504 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c04723-efa9-4b51-9213-3a22b548b114-config\") pod \"machine-approver-56656f9798-7rx4l\" (UID: \"17c04723-efa9-4b51-9213-3a22b548b114\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660525 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9kdz\" (UniqueName: \"kubernetes.io/projected/1bd1ba13-d69b-400b-8337-ad2938a9452d-kube-api-access-z9kdz\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660539 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-client-ca\") pod \"route-controller-manager-6576b87f9c-42mx8\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.660977 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.661602 4813 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-console"/"console-oauth-config" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.661923 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.662117 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.663188 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.664528 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.668777 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.671639 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.679525 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.691521 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-z2wgd"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.692127 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.693720 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.694828 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dqskm"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.695247 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.696135 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.696701 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.697547 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.697904 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.699713 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.700180 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.700294 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.701420 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.701771 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.702131 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.703457 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.704447 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.706264 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9pw9r"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.706904 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.707154 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lvbmn"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.707864 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lvbmn" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.710530 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fkwkx"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.711658 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.713106 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.713188 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.721169 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gjzhl"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.721242 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qxxm8"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.722665 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-55zl9"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.723807 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-whhpn"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.724727 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-55zl9" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.727251 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lbvg5"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.728036 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.728211 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.728286 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-77nlz"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.728735 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.728837 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.729973 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.730482 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.731082 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7rbpj"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.732252 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x222k"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.733222 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.733348 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-75569"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.736033 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.737475 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.738558 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-29bxl"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.739857 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.740945 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.742234 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.743792 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mg6kk"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.744983 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.747917 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.748757 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-l2l9m"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.749849 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.751153 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.752209 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.753370 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.753380 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.754551 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-etcd-operator/etcd-operator-b45778765-dqskm"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.755651 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.756586 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.757643 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.758722 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-cf6d7"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.759707 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lvbmn"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.760698 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-86t4c"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761168 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24s8v\" (UniqueName: \"kubernetes.io/projected/dcabcb2d-1368-4303-b9e8-f7fd269ce1ca-kube-api-access-24s8v\") pod \"downloads-7954f5f757-7rbpj\" (UID: \"dcabcb2d-1368-4303-b9e8-f7fd269ce1ca\") " pod="openshift-console/downloads-7954f5f757-7rbpj" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761207 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: 
\"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761234 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp75m\" (UniqueName: \"kubernetes.io/projected/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-kube-api-access-rp75m\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761260 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/690deab5-f2ed-4fa4-8191-7bd7a625b924-proxy-tls\") pod \"machine-config-controller-84d6567774-zw6c4\" (UID: \"690deab5-f2ed-4fa4-8191-7bd7a625b924\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761295 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-serving-cert\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761367 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5bee7657-82f7-4887-82e8-6029f64d52f5-metrics-tls\") pod \"dns-operator-744455d44c-mg6kk\" (UID: \"5bee7657-82f7-4887-82e8-6029f64d52f5\") " pod="openshift-dns-operator/dns-operator-744455d44c-mg6kk" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761391 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rj2b\" 
(UniqueName: \"kubernetes.io/projected/5bee7657-82f7-4887-82e8-6029f64d52f5-kube-api-access-7rj2b\") pod \"dns-operator-744455d44c-mg6kk\" (UID: \"5bee7657-82f7-4887-82e8-6029f64d52f5\") " pod="openshift-dns-operator/dns-operator-744455d44c-mg6kk" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761421 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38bc9742-49be-4bb0-924e-2ce0db82ec2e-serving-cert\") pod \"openshift-config-operator-7777fb866f-75569\" (UID: \"38bc9742-49be-4bb0-924e-2ce0db82ec2e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761437 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb317bb8-7cfa-4865-8390-c7be4460c44b-encryption-config\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761455 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a5230ee-cb95-4c2b-984a-3ed9286f4c45-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9kvp9\" (UID: \"9a5230ee-cb95-4c2b-984a-3ed9286f4c45\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761472 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3147530a-17b5-4388-85a2-4644ff82a31a-serving-cert\") pod \"console-operator-58897d9998-cf6d7\" (UID: \"3147530a-17b5-4388-85a2-4644ff82a31a\") " pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 
08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761489 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bd1ba13-d69b-400b-8337-ad2938a9452d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761505 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bd1ba13-d69b-400b-8337-ad2938a9452d-serving-cert\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761524 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcfgd\" (UniqueName: \"kubernetes.io/projected/e48a8873-082c-40b8-8dca-60190e4772b2-kube-api-access-dcfgd\") pod \"openshift-controller-manager-operator-756b6f6bc6-8r6b6\" (UID: \"e48a8873-082c-40b8-8dca-60190e4772b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761543 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/967e0fab-3a56-4541-b3aa-e626e2c524c1-serving-cert\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761560 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3147530a-17b5-4388-85a2-4644ff82a31a-trusted-ca\") pod 
\"console-operator-58897d9998-cf6d7\" (UID: \"3147530a-17b5-4388-85a2-4644ff82a31a\") " pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761573 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-86t4c" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761584 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69c6afc8-1fa5-44e1-8ba7-bcf6cd506952-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-n8v5w\" (UID: \"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761600 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c7dn\" (UniqueName: \"kubernetes.io/projected/17c04723-efa9-4b51-9213-3a22b548b114-kube-api-access-8c7dn\") pod \"machine-approver-56656f9798-7rx4l\" (UID: \"17c04723-efa9-4b51-9213-3a22b548b114\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761616 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/edb8ea95-7464-414b-8208-03a3a8426d74-tmpfs\") pod \"packageserver-d55dfcdfc-7bmzh\" (UID: \"edb8ea95-7464-414b-8208-03a3a8426d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761633 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-oauth-config\") pod \"console-f9d7485db-l2l9m\" (UID: 
\"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761685 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91672759-be09-404a-b4f4-ccbb995f9209-config\") pod \"kube-controller-manager-operator-78b949d7b-qrm4l\" (UID: \"91672759-be09-404a-b4f4-ccbb995f9209\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761703 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-config\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761720 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/690deab5-f2ed-4fa4-8191-7bd7a625b924-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zw6c4\" (UID: \"690deab5-f2ed-4fa4-8191-7bd7a625b924\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761737 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjzt5\" (UniqueName: \"kubernetes.io/projected/38bc9742-49be-4bb0-924e-2ce0db82ec2e-kube-api-access-gjzt5\") pod \"openshift-config-operator-7777fb866f-75569\" (UID: \"38bc9742-49be-4bb0-924e-2ce0db82ec2e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761755 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb317bb8-7cfa-4865-8390-c7be4460c44b-etcd-client\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761775 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dszt\" (UniqueName: \"kubernetes.io/projected/967e0fab-3a56-4541-b3aa-e626e2c524c1-kube-api-access-2dszt\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761793 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-audit-policies\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761809 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91672759-be09-404a-b4f4-ccbb995f9209-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qrm4l\" (UID: \"91672759-be09-404a-b4f4-ccbb995f9209\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761825 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/edb8ea95-7464-414b-8208-03a3a8426d74-apiservice-cert\") pod \"packageserver-d55dfcdfc-7bmzh\" (UID: 
\"edb8ea95-7464-414b-8208-03a3a8426d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761833 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-62lng"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761843 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-client-ca\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761860 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761879 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761918 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69c6afc8-1fa5-44e1-8ba7-bcf6cd506952-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-n8v5w\" (UID: \"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.761969 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bd1ba13-d69b-400b-8337-ad2938a9452d-encryption-config\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.762754 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3147530a-17b5-4388-85a2-4644ff82a31a-config\") pod \"console-operator-58897d9998-cf6d7\" (UID: \"3147530a-17b5-4388-85a2-4644ff82a31a\") " pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.762797 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/69c6afc8-1fa5-44e1-8ba7-bcf6cd506952-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-n8v5w\" (UID: \"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.762834 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-serving-cert\") pod \"route-controller-manager-6576b87f9c-42mx8\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.762858 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-m4bnt\" (UniqueName: \"kubernetes.io/projected/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-kube-api-access-m4bnt\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.762886 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-image-import-ca\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.762913 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f185fcc-0363-430b-a331-0e8ea791f9f6-serving-cert\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.762943 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2bqf\" (UniqueName: \"kubernetes.io/projected/9f185fcc-0363-430b-a331-0e8ea791f9f6-kube-api-access-f2bqf\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.762967 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/967e0fab-3a56-4541-b3aa-e626e2c524c1-config\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" Feb 17 08:43:11 crc 
kubenswrapper[4813]: I0217 08:43:11.762987 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bd1ba13-d69b-400b-8337-ad2938a9452d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.762989 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bd1ba13-d69b-400b-8337-ad2938a9452d-etcd-client\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.762802 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pqvf4"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.763029 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-client-ca\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.763087 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.763111 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/91672759-be09-404a-b4f4-ccbb995f9209-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qrm4l\" (UID: \"91672759-be09-404a-b4f4-ccbb995f9209\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.763130 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9lv4\" (UniqueName: \"kubernetes.io/projected/edb8ea95-7464-414b-8208-03a3a8426d74-kube-api-access-g9lv4\") pod \"packageserver-d55dfcdfc-7bmzh\" (UID: \"edb8ea95-7464-414b-8208-03a3a8426d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.763690 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/967e0fab-3a56-4541-b3aa-e626e2c524c1-config\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.764134 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.764226 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.764583 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.764652 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bf4b8a5c-06c8-4206-852c-3e58e2e35bca-images\") pod \"machine-api-operator-5694c8668f-gjzhl\" (UID: \"bf4b8a5c-06c8-4206-852c-3e58e2e35bca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.764677 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc97m\" (UniqueName: \"kubernetes.io/projected/78c22549-ebe5-4e3a-8805-d180911a3c94-kube-api-access-qc97m\") pod \"olm-operator-6b444d44fb-dj88q\" (UID: \"78c22549-ebe5-4e3a-8805-d180911a3c94\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.764711 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967e0fab-3a56-4541-b3aa-e626e2c524c1-service-ca-bundle\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.764723 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-whhpn"] Feb 17 08:43:11 crc 
kubenswrapper[4813]: I0217 08:43:11.764729 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dbls\" (UniqueName: \"kubernetes.io/projected/11abe3c6-6bcb-4453-ba5d-71329e039ccc-kube-api-access-4dbls\") pod \"migrator-59844c95c7-62lng\" (UID: \"11abe3c6-6bcb-4453-ba5d-71329e039ccc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62lng" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.765296 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bf4b8a5c-06c8-4206-852c-3e58e2e35bca-images\") pod \"machine-api-operator-5694c8668f-gjzhl\" (UID: \"bf4b8a5c-06c8-4206-852c-3e58e2e35bca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.765576 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/17c04723-efa9-4b51-9213-3a22b548b114-auth-proxy-config\") pod \"machine-approver-56656f9798-7rx4l\" (UID: \"17c04723-efa9-4b51-9213-3a22b548b114\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.765639 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-audit-dir\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.765673 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.765713 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-trusted-ca-bundle\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.765740 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5f29fce-8bc9-48cb-b808-f55cb2e25c31-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8j65n\" (UID: \"e5f29fce-8bc9-48cb-b808-f55cb2e25c31\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.765760 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-86t4c"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766075 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967e0fab-3a56-4541-b3aa-e626e2c524c1-service-ca-bundle\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766149 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rczgx\" (UniqueName: \"kubernetes.io/projected/690deab5-f2ed-4fa4-8191-7bd7a625b924-kube-api-access-rczgx\") pod 
\"machine-config-controller-84d6567774-zw6c4\" (UID: \"690deab5-f2ed-4fa4-8191-7bd7a625b924\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766178 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-audit\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766209 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnggm\" (UniqueName: \"kubernetes.io/projected/712b5668-146d-4a1f-a86d-fb65b56697b7-kube-api-access-xnggm\") pod \"openshift-apiserver-operator-796bbdcf4f-hzjbh\" (UID: \"712b5668-146d-4a1f-a86d-fb65b56697b7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766241 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njt9f\" (UniqueName: \"kubernetes.io/projected/3147530a-17b5-4388-85a2-4644ff82a31a-kube-api-access-njt9f\") pod \"console-operator-58897d9998-cf6d7\" (UID: \"3147530a-17b5-4388-85a2-4644ff82a31a\") " pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766260 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e48a8873-082c-40b8-8dca-60190e4772b2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8r6b6\" (UID: \"e48a8873-082c-40b8-8dca-60190e4772b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" Feb 17 08:43:11 crc 
kubenswrapper[4813]: I0217 08:43:11.766362 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766388 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/17c04723-efa9-4b51-9213-3a22b548b114-auth-proxy-config\") pod \"machine-approver-56656f9798-7rx4l\" (UID: \"17c04723-efa9-4b51-9213-3a22b548b114\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766399 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/712b5668-146d-4a1f-a86d-fb65b56697b7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hzjbh\" (UID: \"712b5668-146d-4a1f-a86d-fb65b56697b7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766461 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766483 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/38bc9742-49be-4bb0-924e-2ce0db82ec2e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-75569\" (UID: \"38bc9742-49be-4bb0-924e-2ce0db82ec2e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766536 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1bd1ba13-d69b-400b-8337-ad2938a9452d-audit-policies\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766559 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766603 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/edb8ea95-7464-414b-8208-03a3a8426d74-webhook-cert\") pod \"packageserver-d55dfcdfc-7bmzh\" (UID: \"edb8ea95-7464-414b-8208-03a3a8426d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766626 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-oauth-serving-cert\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc 
kubenswrapper[4813]: I0217 08:43:11.766652 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xp6g\" (UniqueName: \"kubernetes.io/projected/e5f29fce-8bc9-48cb-b808-f55cb2e25c31-kube-api-access-8xp6g\") pod \"control-plane-machine-set-operator-78cbb6b69f-8j65n\" (UID: \"e5f29fce-8bc9-48cb-b808-f55cb2e25c31\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766676 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-config\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.766997 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bd1ba13-d69b-400b-8337-ad2938a9452d-encryption-config\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767005 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/78c22549-ebe5-4e3a-8805-d180911a3c94-srv-cert\") pod \"olm-operator-6b444d44fb-dj88q\" (UID: \"78c22549-ebe5-4e3a-8805-d180911a3c94\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767066 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/78c22549-ebe5-4e3a-8805-d180911a3c94-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-dj88q\" (UID: \"78c22549-ebe5-4e3a-8805-d180911a3c94\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767103 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/712b5668-146d-4a1f-a86d-fb65b56697b7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hzjbh\" (UID: \"712b5668-146d-4a1f-a86d-fb65b56697b7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767106 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf5x9\" (UniqueName: \"kubernetes.io/projected/9a5230ee-cb95-4c2b-984a-3ed9286f4c45-kube-api-access-qf5x9\") pod \"cluster-samples-operator-665b6dd947-9kvp9\" (UID: \"9a5230ee-cb95-4c2b-984a-3ed9286f4c45\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767141 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767165 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767324 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c04723-efa9-4b51-9213-3a22b548b114-config\") pod \"machine-approver-56656f9798-7rx4l\" (UID: 
\"17c04723-efa9-4b51-9213-3a22b548b114\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767357 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1bd1ba13-d69b-400b-8337-ad2938a9452d-audit-policies\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767359 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967e0fab-3a56-4541-b3aa-e626e2c524c1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767430 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767438 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bd1ba13-d69b-400b-8337-ad2938a9452d-etcd-client\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767456 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-service-ca\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767497 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9kdz\" (UniqueName: \"kubernetes.io/projected/1bd1ba13-d69b-400b-8337-ad2938a9452d-kube-api-access-z9kdz\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-client-ca\") pod \"route-controller-manager-6576b87f9c-42mx8\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767548 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb317bb8-7cfa-4865-8390-c7be4460c44b-node-pullsecrets\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767577 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e48a8873-082c-40b8-8dca-60190e4772b2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8r6b6\" (UID: \"e48a8873-082c-40b8-8dca-60190e4772b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" Feb 17 08:43:11 crc kubenswrapper[4813]: 
I0217 08:43:11.767655 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f185fcc-0363-430b-a331-0e8ea791f9f6-serving-cert\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767700 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767733 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/17c04723-efa9-4b51-9213-3a22b548b114-machine-approver-tls\") pod \"machine-approver-56656f9798-7rx4l\" (UID: \"17c04723-efa9-4b51-9213-3a22b548b114\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767757 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb317bb8-7cfa-4865-8390-c7be4460c44b-audit-dir\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767782 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1bd1ba13-d69b-400b-8337-ad2938a9452d-audit-dir\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: 
\"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767809 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zcdq\" (UniqueName: \"kubernetes.io/projected/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-kube-api-access-2zcdq\") pod \"route-controller-manager-6576b87f9c-42mx8\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767832 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-config\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767854 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/712b5668-146d-4a1f-a86d-fb65b56697b7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hzjbh\" (UID: \"712b5668-146d-4a1f-a86d-fb65b56697b7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767880 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767907 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/bf4b8a5c-06c8-4206-852c-3e58e2e35bca-config\") pod \"machine-api-operator-5694c8668f-gjzhl\" (UID: \"bf4b8a5c-06c8-4206-852c-3e58e2e35bca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767931 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf4b8a5c-06c8-4206-852c-3e58e2e35bca-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gjzhl\" (UID: \"bf4b8a5c-06c8-4206-852c-3e58e2e35bca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767952 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q2cr\" (UniqueName: \"kubernetes.io/projected/bf4b8a5c-06c8-4206-852c-3e58e2e35bca-kube-api-access-4q2cr\") pod \"machine-api-operator-5694c8668f-gjzhl\" (UID: \"bf4b8a5c-06c8-4206-852c-3e58e2e35bca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.767978 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.768001 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-etcd-serving-ca\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " 
pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.768021 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb317bb8-7cfa-4865-8390-c7be4460c44b-serving-cert\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.768045 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz4hr\" (UniqueName: \"kubernetes.io/projected/eb317bb8-7cfa-4865-8390-c7be4460c44b-kube-api-access-gz4hr\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.768071 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bd1ba13-d69b-400b-8337-ad2938a9452d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.768097 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-config\") pod \"route-controller-manager-6576b87f9c-42mx8\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.768123 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwmp5\" (UniqueName: 
\"kubernetes.io/projected/69c6afc8-1fa5-44e1-8ba7-bcf6cd506952-kube-api-access-wwmp5\") pod \"cluster-image-registry-operator-dc59b4c8b-n8v5w\" (UID: \"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.768227 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1bd1ba13-d69b-400b-8337-ad2938a9452d-audit-dir\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.768245 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967e0fab-3a56-4541-b3aa-e626e2c524c1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.768373 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-client-ca\") pod \"route-controller-manager-6576b87f9c-42mx8\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.768412 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.768445 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17c04723-efa9-4b51-9213-3a22b548b114-config\") pod \"machine-approver-56656f9798-7rx4l\" (UID: 
\"17c04723-efa9-4b51-9213-3a22b548b114\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.768789 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bd1ba13-d69b-400b-8337-ad2938a9452d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.769076 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/967e0fab-3a56-4541-b3aa-e626e2c524c1-serving-cert\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.769181 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bd1ba13-d69b-400b-8337-ad2938a9452d-serving-cert\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.769324 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a5230ee-cb95-4c2b-984a-3ed9286f4c45-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9kvp9\" (UID: \"9a5230ee-cb95-4c2b-984a-3ed9286f4c45\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.769347 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf4b8a5c-06c8-4206-852c-3e58e2e35bca-config\") 
pod \"machine-api-operator-5694c8668f-gjzhl\" (UID: \"bf4b8a5c-06c8-4206-852c-3e58e2e35bca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.769451 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-config\") pod \"route-controller-manager-6576b87f9c-42mx8\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.769959 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-77nlz"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.770162 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-config\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.770960 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/17c04723-efa9-4b51-9213-3a22b548b114-machine-approver-tls\") pod \"machine-approver-56656f9798-7rx4l\" (UID: \"17c04723-efa9-4b51-9213-3a22b548b114\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.770960 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9pw9r"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.771524 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf4b8a5c-06c8-4206-852c-3e58e2e35bca-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gjzhl\" (UID: \"bf4b8a5c-06c8-4206-852c-3e58e2e35bca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.771789 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-serving-cert\") pod \"route-controller-manager-6576b87f9c-42mx8\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.771941 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lbvg5"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.772202 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/712b5668-146d-4a1f-a86d-fb65b56697b7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hzjbh\" (UID: \"712b5668-146d-4a1f-a86d-fb65b56697b7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.772893 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.773644 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.773893 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pqvf4"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.774849 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lmkjb"] Feb 17 
08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.775762 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lmkjb" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.775773 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lmkjb"] Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.795003 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.813337 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.834260 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.853086 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869176 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rj2b\" (UniqueName: \"kubernetes.io/projected/5bee7657-82f7-4887-82e8-6029f64d52f5-kube-api-access-7rj2b\") pod \"dns-operator-744455d44c-mg6kk\" (UID: \"5bee7657-82f7-4887-82e8-6029f64d52f5\") " pod="openshift-dns-operator/dns-operator-744455d44c-mg6kk" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869273 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-serving-cert\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: 
I0217 08:43:11.869297 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5bee7657-82f7-4887-82e8-6029f64d52f5-metrics-tls\") pod \"dns-operator-744455d44c-mg6kk\" (UID: \"5bee7657-82f7-4887-82e8-6029f64d52f5\") " pod="openshift-dns-operator/dns-operator-744455d44c-mg6kk" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869337 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38bc9742-49be-4bb0-924e-2ce0db82ec2e-serving-cert\") pod \"openshift-config-operator-7777fb866f-75569\" (UID: \"38bc9742-49be-4bb0-924e-2ce0db82ec2e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb317bb8-7cfa-4865-8390-c7be4460c44b-encryption-config\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869466 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3147530a-17b5-4388-85a2-4644ff82a31a-serving-cert\") pod \"console-operator-58897d9998-cf6d7\" (UID: \"3147530a-17b5-4388-85a2-4644ff82a31a\") " pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869492 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcfgd\" (UniqueName: \"kubernetes.io/projected/e48a8873-082c-40b8-8dca-60190e4772b2-kube-api-access-dcfgd\") pod \"openshift-controller-manager-operator-756b6f6bc6-8r6b6\" (UID: \"e48a8873-082c-40b8-8dca-60190e4772b2\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869515 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3147530a-17b5-4388-85a2-4644ff82a31a-trusted-ca\") pod \"console-operator-58897d9998-cf6d7\" (UID: \"3147530a-17b5-4388-85a2-4644ff82a31a\") " pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869546 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69c6afc8-1fa5-44e1-8ba7-bcf6cd506952-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-n8v5w\" (UID: \"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869576 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/edb8ea95-7464-414b-8208-03a3a8426d74-tmpfs\") pod \"packageserver-d55dfcdfc-7bmzh\" (UID: \"edb8ea95-7464-414b-8208-03a3a8426d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869600 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-oauth-config\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869625 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91672759-be09-404a-b4f4-ccbb995f9209-config\") pod 
\"kube-controller-manager-operator-78b949d7b-qrm4l\" (UID: \"91672759-be09-404a-b4f4-ccbb995f9209\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869648 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-config\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869674 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/690deab5-f2ed-4fa4-8191-7bd7a625b924-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zw6c4\" (UID: \"690deab5-f2ed-4fa4-8191-7bd7a625b924\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869699 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjzt5\" (UniqueName: \"kubernetes.io/projected/38bc9742-49be-4bb0-924e-2ce0db82ec2e-kube-api-access-gjzt5\") pod \"openshift-config-operator-7777fb866f-75569\" (UID: \"38bc9742-49be-4bb0-924e-2ce0db82ec2e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869734 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-audit-policies\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869776 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91672759-be09-404a-b4f4-ccbb995f9209-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qrm4l\" (UID: \"91672759-be09-404a-b4f4-ccbb995f9209\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869796 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/edb8ea95-7464-414b-8208-03a3a8426d74-apiservice-cert\") pod \"packageserver-d55dfcdfc-7bmzh\" (UID: \"edb8ea95-7464-414b-8208-03a3a8426d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869819 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb317bb8-7cfa-4865-8390-c7be4460c44b-etcd-client\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869842 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869862 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " 
pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869886 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3147530a-17b5-4388-85a2-4644ff82a31a-config\") pod \"console-operator-58897d9998-cf6d7\" (UID: \"3147530a-17b5-4388-85a2-4644ff82a31a\") " pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869910 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/69c6afc8-1fa5-44e1-8ba7-bcf6cd506952-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-n8v5w\" (UID: \"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869930 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69c6afc8-1fa5-44e1-8ba7-bcf6cd506952-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-n8v5w\" (UID: \"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.869955 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/627203b8-948f-4881-8398-ecd21cc274f6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zfxq6\" (UID: \"627203b8-948f-4881-8398-ecd21cc274f6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.870055 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4bnt\" (UniqueName: 
\"kubernetes.io/projected/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-kube-api-access-m4bnt\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.870090 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-image-import-ca\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.870112 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9lv4\" (UniqueName: \"kubernetes.io/projected/edb8ea95-7464-414b-8208-03a3a8426d74-kube-api-access-g9lv4\") pod \"packageserver-d55dfcdfc-7bmzh\" (UID: \"edb8ea95-7464-414b-8208-03a3a8426d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.870136 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91672759-be09-404a-b4f4-ccbb995f9209-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qrm4l\" (UID: \"91672759-be09-404a-b4f4-ccbb995f9209\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.870158 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc97m\" (UniqueName: \"kubernetes.io/projected/78c22549-ebe5-4e3a-8805-d180911a3c94-kube-api-access-qc97m\") pod \"olm-operator-6b444d44fb-dj88q\" (UID: \"78c22549-ebe5-4e3a-8805-d180911a3c94\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" Feb 17 08:43:11 crc 
kubenswrapper[4813]: I0217 08:43:11.870181 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dbls\" (UniqueName: \"kubernetes.io/projected/11abe3c6-6bcb-4453-ba5d-71329e039ccc-kube-api-access-4dbls\") pod \"migrator-59844c95c7-62lng\" (UID: \"11abe3c6-6bcb-4453-ba5d-71329e039ccc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62lng" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.870205 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5f29fce-8bc9-48cb-b808-f55cb2e25c31-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8j65n\" (UID: \"e5f29fce-8bc9-48cb-b808-f55cb2e25c31\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.870432 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-audit-dir\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.871070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/edb8ea95-7464-414b-8208-03a3a8426d74-tmpfs\") pod \"packageserver-d55dfcdfc-7bmzh\" (UID: \"edb8ea95-7464-414b-8208-03a3a8426d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.871545 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91672759-be09-404a-b4f4-ccbb995f9209-config\") pod \"kube-controller-manager-operator-78b949d7b-qrm4l\" (UID: 
\"91672759-be09-404a-b4f4-ccbb995f9209\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.871828 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872193 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-trusted-ca-bundle\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872226 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczgx\" (UniqueName: \"kubernetes.io/projected/690deab5-f2ed-4fa4-8191-7bd7a625b924-kube-api-access-rczgx\") pod \"machine-config-controller-84d6567774-zw6c4\" (UID: \"690deab5-f2ed-4fa4-8191-7bd7a625b924\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872257 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njt9f\" (UniqueName: \"kubernetes.io/projected/3147530a-17b5-4388-85a2-4644ff82a31a-kube-api-access-njt9f\") pod \"console-operator-58897d9998-cf6d7\" (UID: \"3147530a-17b5-4388-85a2-4644ff82a31a\") " pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872278 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-audit\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872300 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872338 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e48a8873-082c-40b8-8dca-60190e4772b2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8r6b6\" (UID: \"e48a8873-082c-40b8-8dca-60190e4772b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872360 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/38bc9742-49be-4bb0-924e-2ce0db82ec2e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-75569\" (UID: \"38bc9742-49be-4bb0-924e-2ce0db82ec2e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872385 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872411 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872455 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/edb8ea95-7464-414b-8208-03a3a8426d74-webhook-cert\") pod \"packageserver-d55dfcdfc-7bmzh\" (UID: \"edb8ea95-7464-414b-8208-03a3a8426d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872478 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/627203b8-948f-4881-8398-ecd21cc274f6-config\") pod \"kube-apiserver-operator-766d6c64bb-zfxq6\" (UID: \"627203b8-948f-4881-8398-ecd21cc274f6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872502 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-oauth-serving-cert\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xp6g\" (UniqueName: 
\"kubernetes.io/projected/e5f29fce-8bc9-48cb-b808-f55cb2e25c31-kube-api-access-8xp6g\") pod \"control-plane-machine-set-operator-78cbb6b69f-8j65n\" (UID: \"e5f29fce-8bc9-48cb-b808-f55cb2e25c31\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872544 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-config\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872576 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/78c22549-ebe5-4e3a-8805-d180911a3c94-srv-cert\") pod \"olm-operator-6b444d44fb-dj88q\" (UID: \"78c22549-ebe5-4e3a-8805-d180911a3c94\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872596 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/78c22549-ebe5-4e3a-8805-d180911a3c94-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dj88q\" (UID: \"78c22549-ebe5-4e3a-8805-d180911a3c94\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872625 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872671 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872692 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-service-ca\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872714 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eb317bb8-7cfa-4865-8390-c7be4460c44b-node-pullsecrets\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872776 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e48a8873-082c-40b8-8dca-60190e4772b2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8r6b6\" (UID: \"e48a8873-082c-40b8-8dca-60190e4772b2\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872801 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb317bb8-7cfa-4865-8390-c7be4460c44b-audit-dir\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872890 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872914 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz4hr\" (UniqueName: \"kubernetes.io/projected/eb317bb8-7cfa-4865-8390-c7be4460c44b-kube-api-access-gz4hr\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872936 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872958 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-etcd-serving-ca\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872983 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb317bb8-7cfa-4865-8390-c7be4460c44b-serving-cert\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.873004 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwmp5\" (UniqueName: \"kubernetes.io/projected/69c6afc8-1fa5-44e1-8ba7-bcf6cd506952-kube-api-access-wwmp5\") pod \"cluster-image-registry-operator-dc59b4c8b-n8v5w\" (UID: \"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.873027 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/627203b8-948f-4881-8398-ecd21cc274f6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zfxq6\" (UID: \"627203b8-948f-4881-8398-ecd21cc274f6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.873051 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc 
kubenswrapper[4813]: I0217 08:43:11.873074 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp75m\" (UniqueName: \"kubernetes.io/projected/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-kube-api-access-rp75m\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.873095 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/690deab5-f2ed-4fa4-8191-7bd7a625b924-proxy-tls\") pod \"machine-config-controller-84d6567774-zw6c4\" (UID: \"690deab5-f2ed-4fa4-8191-7bd7a625b924\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.873119 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24s8v\" (UniqueName: \"kubernetes.io/projected/dcabcb2d-1368-4303-b9e8-f7fd269ce1ca-kube-api-access-24s8v\") pod \"downloads-7954f5f757-7rbpj\" (UID: \"dcabcb2d-1368-4303-b9e8-f7fd269ce1ca\") " pod="openshift-console/downloads-7954f5f757-7rbpj" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.874083 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-trusted-ca-bundle\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.874981 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-audit-dir\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 
08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.875294 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5bee7657-82f7-4887-82e8-6029f64d52f5-metrics-tls\") pod \"dns-operator-744455d44c-mg6kk\" (UID: \"5bee7657-82f7-4887-82e8-6029f64d52f5\") " pod="openshift-dns-operator/dns-operator-744455d44c-mg6kk" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.875071 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-audit\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.875206 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-oauth-serving-cert\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.874901 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3147530a-17b5-4388-85a2-4644ff82a31a-trusted-ca\") pod \"console-operator-58897d9998-cf6d7\" (UID: \"3147530a-17b5-4388-85a2-4644ff82a31a\") " pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.875697 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.876002 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/eb317bb8-7cfa-4865-8390-c7be4460c44b-node-pullsecrets\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.876073 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/38bc9742-49be-4bb0-924e-2ce0db82ec2e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-75569\" (UID: \"38bc9742-49be-4bb0-924e-2ce0db82ec2e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.872765 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-config\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.876459 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb317bb8-7cfa-4865-8390-c7be4460c44b-audit-dir\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.877061 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-image-import-ca\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.877756 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.877971 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.877982 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-etcd-serving-ca\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.878070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.878242 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-service-ca\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.878381 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/690deab5-f2ed-4fa4-8191-7bd7a625b924-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zw6c4\" (UID: \"690deab5-f2ed-4fa4-8191-7bd7a625b924\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.878472 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3147530a-17b5-4388-85a2-4644ff82a31a-serving-cert\") pod \"console-operator-58897d9998-cf6d7\" (UID: \"3147530a-17b5-4388-85a2-4644ff82a31a\") " pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.878441 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-oauth-config\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.878881 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb317bb8-7cfa-4865-8390-c7be4460c44b-config\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.878991 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3147530a-17b5-4388-85a2-4644ff82a31a-config\") pod \"console-operator-58897d9998-cf6d7\" (UID: \"3147530a-17b5-4388-85a2-4644ff82a31a\") " pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.879986 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e48a8873-082c-40b8-8dca-60190e4772b2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-8r6b6\" (UID: \"e48a8873-082c-40b8-8dca-60190e4772b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.880287 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-serving-cert\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.880582 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-audit-policies\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.880597 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.881328 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/edb8ea95-7464-414b-8208-03a3a8426d74-apiservice-cert\") pod \"packageserver-d55dfcdfc-7bmzh\" (UID: \"edb8ea95-7464-414b-8208-03a3a8426d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 
08:43:11.881885 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5f29fce-8bc9-48cb-b808-f55cb2e25c31-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8j65n\" (UID: \"e5f29fce-8bc9-48cb-b808-f55cb2e25c31\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.882008 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69c6afc8-1fa5-44e1-8ba7-bcf6cd506952-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-n8v5w\" (UID: \"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.882405 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.882570 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/69c6afc8-1fa5-44e1-8ba7-bcf6cd506952-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-n8v5w\" (UID: \"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.882905 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.883411 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.883682 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38bc9742-49be-4bb0-924e-2ce0db82ec2e-serving-cert\") pod \"openshift-config-operator-7777fb866f-75569\" (UID: \"38bc9742-49be-4bb0-924e-2ce0db82ec2e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.884471 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb317bb8-7cfa-4865-8390-c7be4460c44b-etcd-client\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.884881 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb317bb8-7cfa-4865-8390-c7be4460c44b-serving-cert\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.885179 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.885198 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/edb8ea95-7464-414b-8208-03a3a8426d74-webhook-cert\") pod \"packageserver-d55dfcdfc-7bmzh\" (UID: \"edb8ea95-7464-414b-8208-03a3a8426d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.885219 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.885758 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.886196 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e48a8873-082c-40b8-8dca-60190e4772b2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-8r6b6\" (UID: \"e48a8873-082c-40b8-8dca-60190e4772b2\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.886680 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.886773 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eb317bb8-7cfa-4865-8390-c7be4460c44b-encryption-config\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.887731 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.888210 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91672759-be09-404a-b4f4-ccbb995f9209-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qrm4l\" (UID: \"91672759-be09-404a-b4f4-ccbb995f9209\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.893929 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.916393 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.954468 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.958170 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/78c22549-ebe5-4e3a-8805-d180911a3c94-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dj88q\" (UID: \"78c22549-ebe5-4e3a-8805-d180911a3c94\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.973275 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.973895 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/627203b8-948f-4881-8398-ecd21cc274f6-config\") pod \"kube-apiserver-operator-766d6c64bb-zfxq6\" (UID: \"627203b8-948f-4881-8398-ecd21cc274f6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.974021 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/627203b8-948f-4881-8398-ecd21cc274f6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zfxq6\" (UID: \"627203b8-948f-4881-8398-ecd21cc274f6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 
08:43:11.974168 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/627203b8-948f-4881-8398-ecd21cc274f6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zfxq6\" (UID: \"627203b8-948f-4881-8398-ecd21cc274f6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.982389 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/690deab5-f2ed-4fa4-8191-7bd7a625b924-proxy-tls\") pod \"machine-config-controller-84d6567774-zw6c4\" (UID: \"690deab5-f2ed-4fa4-8191-7bd7a625b924\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" Feb 17 08:43:11 crc kubenswrapper[4813]: I0217 08:43:11.995590 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.013416 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.034956 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.053801 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.074456 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.080006 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/78c22549-ebe5-4e3a-8805-d180911a3c94-srv-cert\") pod \"olm-operator-6b444d44fb-dj88q\" (UID: \"78c22549-ebe5-4e3a-8805-d180911a3c94\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.094466 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.114846 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.133805 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.154304 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.174844 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.193815 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.213615 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.234466 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.254823 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.274590 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.294831 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.314423 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.334782 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.354929 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.374400 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.395490 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.414847 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.433875 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.453912 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.474813 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 
08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.494886 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.514160 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.534241 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.554938 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.574701 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.586162 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/627203b8-948f-4881-8398-ecd21cc274f6-config\") pod \"kube-apiserver-operator-766d6c64bb-zfxq6\" (UID: \"627203b8-948f-4881-8398-ecd21cc274f6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.595012 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.614596 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.629866 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/627203b8-948f-4881-8398-ecd21cc274f6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zfxq6\" (UID: \"627203b8-948f-4881-8398-ecd21cc274f6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.634368 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.654686 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.674572 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.694944 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.712804 4813 request.go:700] Waited for 1.010518368s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.715135 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.733927 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.754045 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 08:43:12 crc 
kubenswrapper[4813]: I0217 08:43:12.775074 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.807010 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.814587 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.835502 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.855103 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.874862 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.894234 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.914047 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.934203 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.954797 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.974095 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 08:43:12 crc kubenswrapper[4813]: I0217 08:43:12.993916 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.014605 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.035613 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.055282 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.074792 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.095270 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.114615 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.134771 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.155060 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.174907 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.207526 4813 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.214971 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.234178 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.254006 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.275459 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.295082 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.314090 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.334489 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.354509 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.374828 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.395500 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 08:43:13 crc 
kubenswrapper[4813]: I0217 08:43:13.414637 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.462256 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dszt\" (UniqueName: \"kubernetes.io/projected/967e0fab-3a56-4541-b3aa-e626e2c524c1-kube-api-access-2dszt\") pod \"authentication-operator-69f744f599-qxxm8\" (UID: \"967e0fab-3a56-4541-b3aa-e626e2c524c1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.478026 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.487994 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c7dn\" (UniqueName: \"kubernetes.io/projected/17c04723-efa9-4b51-9213-3a22b548b114-kube-api-access-8c7dn\") pod \"machine-approver-56656f9798-7rx4l\" (UID: \"17c04723-efa9-4b51-9213-3a22b548b114\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.496174 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.501271 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2bqf\" (UniqueName: \"kubernetes.io/projected/9f185fcc-0363-430b-a331-0e8ea791f9f6-kube-api-access-f2bqf\") pod \"controller-manager-879f6c89f-fkwkx\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.514968 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.535072 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.583116 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf5x9\" (UniqueName: \"kubernetes.io/projected/9a5230ee-cb95-4c2b-984a-3ed9286f4c45-kube-api-access-qf5x9\") pod \"cluster-samples-operator-665b6dd947-9kvp9\" (UID: \"9a5230ee-cb95-4c2b-984a-3ed9286f4c45\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.612660 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnggm\" (UniqueName: \"kubernetes.io/projected/712b5668-146d-4a1f-a86d-fb65b56697b7-kube-api-access-xnggm\") pod \"openshift-apiserver-operator-796bbdcf4f-hzjbh\" (UID: \"712b5668-146d-4a1f-a86d-fb65b56697b7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.623224 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9kdz\" (UniqueName: \"kubernetes.io/projected/1bd1ba13-d69b-400b-8337-ad2938a9452d-kube-api-access-z9kdz\") pod \"apiserver-7bbb656c7d-fkd2j\" (UID: \"1bd1ba13-d69b-400b-8337-ad2938a9452d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.634420 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zcdq\" (UniqueName: \"kubernetes.io/projected/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-kube-api-access-2zcdq\") pod \"route-controller-manager-6576b87f9c-42mx8\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" 
Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.641680 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.660873 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q2cr\" (UniqueName: \"kubernetes.io/projected/bf4b8a5c-06c8-4206-852c-3e58e2e35bca-kube-api-access-4q2cr\") pod \"machine-api-operator-5694c8668f-gjzhl\" (UID: \"bf4b8a5c-06c8-4206-852c-3e58e2e35bca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.669963 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.674956 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.696044 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.701700 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.704806 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.714973 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.723727 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" Feb 17 08:43:13 crc kubenswrapper[4813]: W0217 08:43:13.729998 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17c04723_efa9_4b51_9213_3a22b548b114.slice/crio-6c3cc0cf574f2cda3f6573d3c35fa58c5f813bd56e11e2308dd34b324c34a6fd WatchSource:0}: Error finding container 6c3cc0cf574f2cda3f6573d3c35fa58c5f813bd56e11e2308dd34b324c34a6fd: Status 404 returned error can't find the container with id 6c3cc0cf574f2cda3f6573d3c35fa58c5f813bd56e11e2308dd34b324c34a6fd Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.732075 4813 request.go:700] Waited for 1.862768379s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.751168 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rj2b\" (UniqueName: \"kubernetes.io/projected/5bee7657-82f7-4887-82e8-6029f64d52f5-kube-api-access-7rj2b\") pod \"dns-operator-744455d44c-mg6kk\" (UID: \"5bee7657-82f7-4887-82e8-6029f64d52f5\") " pod="openshift-dns-operator/dns-operator-744455d44c-mg6kk" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.769660 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69c6afc8-1fa5-44e1-8ba7-bcf6cd506952-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-n8v5w\" (UID: \"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.779139 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.792431 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.792992 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcfgd\" (UniqueName: \"kubernetes.io/projected/e48a8873-082c-40b8-8dca-60190e4772b2-kube-api-access-dcfgd\") pod \"openshift-controller-manager-operator-756b6f6bc6-8r6b6\" (UID: \"e48a8873-082c-40b8-8dca-60190e4772b2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.828003 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjzt5\" (UniqueName: \"kubernetes.io/projected/38bc9742-49be-4bb0-924e-2ce0db82ec2e-kube-api-access-gjzt5\") pod \"openshift-config-operator-7777fb866f-75569\" (UID: \"38bc9742-49be-4bb0-924e-2ce0db82ec2e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.830622 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24s8v\" (UniqueName: \"kubernetes.io/projected/dcabcb2d-1368-4303-b9e8-f7fd269ce1ca-kube-api-access-24s8v\") pod \"downloads-7954f5f757-7rbpj\" (UID: \"dcabcb2d-1368-4303-b9e8-f7fd269ce1ca\") " pod="openshift-console/downloads-7954f5f757-7rbpj" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.832298 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.848844 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.856829 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mg6kk" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.857900 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczgx\" (UniqueName: \"kubernetes.io/projected/690deab5-f2ed-4fa4-8191-7bd7a625b924-kube-api-access-rczgx\") pod \"machine-config-controller-84d6567774-zw6c4\" (UID: \"690deab5-f2ed-4fa4-8191-7bd7a625b924\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.870045 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njt9f\" (UniqueName: \"kubernetes.io/projected/3147530a-17b5-4388-85a2-4644ff82a31a-kube-api-access-njt9f\") pod \"console-operator-58897d9998-cf6d7\" (UID: \"3147530a-17b5-4388-85a2-4644ff82a31a\") " pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.902605 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4bnt\" (UniqueName: \"kubernetes.io/projected/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-kube-api-access-m4bnt\") pod \"oauth-openshift-558db77b4-29bxl\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.913972 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j"] Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.915458 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9lv4\" (UniqueName: 
\"kubernetes.io/projected/edb8ea95-7464-414b-8208-03a3a8426d74-kube-api-access-g9lv4\") pod \"packageserver-d55dfcdfc-7bmzh\" (UID: \"edb8ea95-7464-414b-8208-03a3a8426d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.924160 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.925105 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fkwkx"] Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.927078 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" event={"ID":"17c04723-efa9-4b51-9213-3a22b548b114","Type":"ContainerStarted","Data":"6c3cc0cf574f2cda3f6573d3c35fa58c5f813bd56e11e2308dd34b324c34a6fd"} Feb 17 08:43:13 crc kubenswrapper[4813]: W0217 08:43:13.929012 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bd1ba13_d69b_400b_8337_ad2938a9452d.slice/crio-3b1651726c6301616e9b80e989e42bd3786d202fe5e1d1840d4bd1d15ca05393 WatchSource:0}: Error finding container 3b1651726c6301616e9b80e989e42bd3786d202fe5e1d1840d4bd1d15ca05393: Status 404 returned error can't find the container with id 3b1651726c6301616e9b80e989e42bd3786d202fe5e1d1840d4bd1d15ca05393 Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.936524 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91672759-be09-404a-b4f4-ccbb995f9209-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qrm4l\" (UID: \"91672759-be09-404a-b4f4-ccbb995f9209\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" 
Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.941885 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qxxm8"] Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.946921 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc97m\" (UniqueName: \"kubernetes.io/projected/78c22549-ebe5-4e3a-8805-d180911a3c94-kube-api-access-qc97m\") pod \"olm-operator-6b444d44fb-dj88q\" (UID: \"78c22549-ebe5-4e3a-8805-d180911a3c94\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.960271 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gjzhl"] Feb 17 08:43:13 crc kubenswrapper[4813]: I0217 08:43:13.970931 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dbls\" (UniqueName: \"kubernetes.io/projected/11abe3c6-6bcb-4453-ba5d-71329e039ccc-kube-api-access-4dbls\") pod \"migrator-59844c95c7-62lng\" (UID: \"11abe3c6-6bcb-4453-ba5d-71329e039ccc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62lng" Feb 17 08:43:13 crc kubenswrapper[4813]: W0217 08:43:13.983271 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f185fcc_0363_430b_a331_0e8ea791f9f6.slice/crio-f1e91e25c895a73029f1085755deb5d9e7366ec10c742c425a1b07b9ba7b6139 WatchSource:0}: Error finding container f1e91e25c895a73029f1085755deb5d9e7366ec10c742c425a1b07b9ba7b6139: Status 404 returned error can't find the container with id f1e91e25c895a73029f1085755deb5d9e7366ec10c742c425a1b07b9ba7b6139 Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.000104 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp75m\" (UniqueName: 
\"kubernetes.io/projected/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-kube-api-access-rp75m\") pod \"console-f9d7485db-l2l9m\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.008477 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.012788 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xp6g\" (UniqueName: \"kubernetes.io/projected/e5f29fce-8bc9-48cb-b808-f55cb2e25c31-kube-api-access-8xp6g\") pod \"control-plane-machine-set-operator-78cbb6b69f-8j65n\" (UID: \"e5f29fce-8bc9-48cb-b808-f55cb2e25c31\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.033060 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwmp5\" (UniqueName: \"kubernetes.io/projected/69c6afc8-1fa5-44e1-8ba7-bcf6cd506952-kube-api-access-wwmp5\") pod \"cluster-image-registry-operator-dc59b4c8b-n8v5w\" (UID: \"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.041210 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.051808 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz4hr\" (UniqueName: \"kubernetes.io/projected/eb317bb8-7cfa-4865-8390-c7be4460c44b-kube-api-access-gz4hr\") pod \"apiserver-76f77b778f-x222k\" (UID: \"eb317bb8-7cfa-4865-8390-c7be4460c44b\") " pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.091495 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/627203b8-948f-4881-8398-ecd21cc274f6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zfxq6\" (UID: \"627203b8-948f-4881-8398-ecd21cc274f6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.093746 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.101053 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.105916 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a507abb0-fee7-454a-bfb9-7d4e3e31bf56-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hgx9k\" (UID: \"a507abb0-fee7-454a-bfb9-7d4e3e31bf56\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106021 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-metrics-certs\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106055 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37375f80-f004-4621-b863-326c6e296435-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-whhpn\" (UID: 
\"37375f80-f004-4621-b863-326c6e296435\") " pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106115 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/088350fe-751e-41a8-8931-784e2a419e22-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lsz9b\" (UID: \"088350fe-751e-41a8-8931-784e2a419e22\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106181 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-csi-data-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106259 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fctw8\" (UniqueName: \"kubernetes.io/projected/b23fc308-85e3-440a-b2d0-5895fe8b79a9-kube-api-access-fctw8\") pod \"service-ca-9c57cc56f-lbvg5\" (UID: \"b23fc308-85e3-440a-b2d0-5895fe8b79a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106291 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-962d4\" (UniqueName: \"kubernetes.io/projected/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-kube-api-access-962d4\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106347 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-stats-auth\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106363 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e17087d-b6e0-4f2c-85b6-ca20ba2fe561-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kkhmc\" (UID: \"6e17087d-b6e0-4f2c-85b6-ca20ba2fe561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106426 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd7nq\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-kube-api-access-xd7nq\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106459 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/088350fe-751e-41a8-8931-784e2a419e22-images\") pod \"machine-config-operator-74547568cd-lsz9b\" (UID: \"088350fe-751e-41a8-8931-784e2a419e22\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106491 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/89c4198d-192e-4533-9c95-27523486a3fa-node-bootstrap-token\") pod \"machine-config-server-55zl9\" (UID: \"89c4198d-192e-4533-9c95-27523486a3fa\") " 
pod="openshift-machine-config-operator/machine-config-server-55zl9" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e17087d-b6e0-4f2c-85b6-ca20ba2fe561-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kkhmc\" (UID: \"6e17087d-b6e0-4f2c-85b6-ca20ba2fe561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106573 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca47b2a4-9965-4fa6-8b80-1e21931f0860-serving-cert\") pod \"service-ca-operator-777779d784-77nlz\" (UID: \"ca47b2a4-9965-4fa6-8b80-1e21931f0860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106609 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdncd\" (UniqueName: \"kubernetes.io/projected/d8b72e72-ec67-4394-9d76-d7cfb15566ed-kube-api-access-hdncd\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106648 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n46v\" (UniqueName: \"kubernetes.io/projected/d34411f8-63ef-4455-a059-992ecf841688-kube-api-access-7n46v\") pod \"catalog-operator-68c6474976-szxq2\" (UID: \"d34411f8-63ef-4455-a059-992ecf841688\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106715 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82xxw\" (UniqueName: \"kubernetes.io/projected/b0a696d4-3301-4fd5-9d70-efa790fbce35-kube-api-access-82xxw\") pod \"collect-profiles-29521950-jtvxc\" (UID: \"b0a696d4-3301-4fd5-9d70-efa790fbce35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106736 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/75c49bc8-43a0-46ea-a9ed-ed22f124ea3c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lvbmn\" (UID: \"75c49bc8-43a0-46ea-a9ed-ed22f124ea3c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lvbmn" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106753 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9kkp\" (UniqueName: \"kubernetes.io/projected/ca47b2a4-9965-4fa6-8b80-1e21931f0860-kube-api-access-b9kkp\") pod \"service-ca-operator-777779d784-77nlz\" (UID: \"ca47b2a4-9965-4fa6-8b80-1e21931f0860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106769 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c3fac74-1086-4222-ad88-6c230d11c667-metrics-tls\") pod \"ingress-operator-5b745b69d9-7r2sv\" (UID: \"4c3fac74-1086-4222-ad88-6c230d11c667\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106798 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-service-ca-bundle\") pod 
\"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106814 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-bound-sa-token\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106830 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv6dd\" (UniqueName: \"kubernetes.io/projected/89c4198d-192e-4533-9c95-27523486a3fa-kube-api-access-bv6dd\") pod \"machine-config-server-55zl9\" (UID: \"89c4198d-192e-4533-9c95-27523486a3fa\") " pod="openshift-machine-config-operator/machine-config-server-55zl9" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106881 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37375f80-f004-4621-b863-326c6e296435-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-whhpn\" (UID: \"37375f80-f004-4621-b863-326c6e296435\") " pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106907 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-default-certificate\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.106981 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c3fac74-1086-4222-ad88-6c230d11c667-trusted-ca\") pod \"ingress-operator-5b745b69d9-7r2sv\" (UID: \"4c3fac74-1086-4222-ad88-6c230d11c667\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107052 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-config\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107126 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c3fac74-1086-4222-ad88-6c230d11c667-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7r2sv\" (UID: \"4c3fac74-1086-4222-ad88-6c230d11c667\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107170 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff6zk\" (UniqueName: \"kubernetes.io/projected/37375f80-f004-4621-b863-326c6e296435-kube-api-access-ff6zk\") pod \"marketplace-operator-79b997595-whhpn\" (UID: \"37375f80-f004-4621-b863-326c6e296435\") " pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/088350fe-751e-41a8-8931-784e2a419e22-proxy-tls\") pod \"machine-config-operator-74547568cd-lsz9b\" (UID: \"088350fe-751e-41a8-8931-784e2a419e22\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107465 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwn5m\" (UniqueName: \"kubernetes.io/projected/4c3fac74-1086-4222-ad88-6c230d11c667-kube-api-access-bwn5m\") pod \"ingress-operator-5b745b69d9-7r2sv\" (UID: \"4c3fac74-1086-4222-ad88-6c230d11c667\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107501 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d34411f8-63ef-4455-a059-992ecf841688-srv-cert\") pod \"catalog-operator-68c6474976-szxq2\" (UID: \"d34411f8-63ef-4455-a059-992ecf841688\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107520 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdn98\" (UniqueName: \"kubernetes.io/projected/088350fe-751e-41a8-8931-784e2a419e22-kube-api-access-fdn98\") pod \"machine-config-operator-74547568cd-lsz9b\" (UID: \"088350fe-751e-41a8-8931-784e2a419e22\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107539 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2qqv\" (UniqueName: \"kubernetes.io/projected/6e17087d-b6e0-4f2c-85b6-ca20ba2fe561-kube-api-access-k2qqv\") pod \"kube-storage-version-migrator-operator-b67b599dd-kkhmc\" (UID: \"6e17087d-b6e0-4f2c-85b6-ca20ba2fe561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 
08:43:14.107713 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nx689\" (UID: \"4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107775 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b614c49-66c0-41c7-bec2-c657495f1a2c-cert\") pod \"ingress-canary-86t4c\" (UID: \"3b614c49-66c0-41c7-bec2-c657495f1a2c\") " pod="openshift-ingress-canary/ingress-canary-86t4c" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107796 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-serving-cert\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107858 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a507abb0-fee7-454a-bfb9-7d4e3e31bf56-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hgx9k\" (UID: \"a507abb0-fee7-454a-bfb9-7d4e3e31bf56\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107876 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs298\" (UniqueName: 
\"kubernetes.io/projected/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-kube-api-access-bs298\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107956 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/89c4198d-192e-4533-9c95-27523486a3fa-certs\") pod \"machine-config-server-55zl9\" (UID: \"89c4198d-192e-4533-9c95-27523486a3fa\") " pod="openshift-machine-config-operator/machine-config-server-55zl9" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107977 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41699b82-4fbd-4bc2-a45c-6971618962df-trusted-ca\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.107994 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-socket-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.108049 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.108066 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b23fc308-85e3-440a-b2d0-5895fe8b79a9-signing-key\") pod \"service-ca-9c57cc56f-lbvg5\" (UID: \"b23fc308-85e3-440a-b2d0-5895fe8b79a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.108191 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/41699b82-4fbd-4bc2-a45c-6971618962df-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.108239 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6j2\" (UniqueName: \"kubernetes.io/projected/75c49bc8-43a0-46ea-a9ed-ed22f124ea3c-kube-api-access-4r6j2\") pod \"multus-admission-controller-857f4d67dd-lvbmn\" (UID: \"75c49bc8-43a0-46ea-a9ed-ed22f124ea3c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lvbmn" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.108279 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0a696d4-3301-4fd5-9d70-efa790fbce35-config-volume\") pod \"collect-profiles-29521950-jtvxc\" (UID: \"b0a696d4-3301-4fd5-9d70-efa790fbce35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.109121 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 08:43:14.609111054 +0000 UTC m=+142.269872277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.109992 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca47b2a4-9965-4fa6-8b80-1e21931f0860-config\") pod \"service-ca-operator-777779d784-77nlz\" (UID: \"ca47b2a4-9965-4fa6-8b80-1e21931f0860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.110193 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-plugins-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.110365 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-etcd-client\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.110403 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/41699b82-4fbd-4bc2-a45c-6971618962df-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.110431 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a507abb0-fee7-454a-bfb9-7d4e3e31bf56-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hgx9k\" (UID: \"a507abb0-fee7-454a-bfb9-7d4e3e31bf56\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.110500 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-7rbpj" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.110668 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0a696d4-3301-4fd5-9d70-efa790fbce35-secret-volume\") pod \"collect-profiles-29521950-jtvxc\" (UID: \"b0a696d4-3301-4fd5-9d70-efa790fbce35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.110934 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-mountpoint-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.111492 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2gg\" (UniqueName: 
\"kubernetes.io/projected/4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f-kube-api-access-zl2gg\") pod \"package-server-manager-789f6589d5-nx689\" (UID: \"4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.111580 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/41699b82-4fbd-4bc2-a45c-6971618962df-registry-certificates\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.112568 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb6bc\" (UniqueName: \"kubernetes.io/projected/3b614c49-66c0-41c7-bec2-c657495f1a2c-kube-api-access-rb6bc\") pod \"ingress-canary-86t4c\" (UID: \"3b614c49-66c0-41c7-bec2-c657495f1a2c\") " pod="openshift-ingress-canary/ingress-canary-86t4c" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.112593 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-etcd-service-ca\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.112632 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b23fc308-85e3-440a-b2d0-5895fe8b79a9-signing-cabundle\") pod \"service-ca-9c57cc56f-lbvg5\" (UID: \"b23fc308-85e3-440a-b2d0-5895fe8b79a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5" Feb 17 08:43:14 crc 
kubenswrapper[4813]: I0217 08:43:14.112680 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-registration-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.113874 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d34411f8-63ef-4455-a059-992ecf841688-profile-collector-cert\") pod \"catalog-operator-68c6474976-szxq2\" (UID: \"d34411f8-63ef-4455-a059-992ecf841688\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.114425 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-registry-tls\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.114480 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-etcd-ca\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.127716 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x222k" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.140629 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-cf6d7" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.165788 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.195436 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.199750 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.204492 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.212189 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.216784 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.216955 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:14.716928745 +0000 UTC m=+142.377689968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217001 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e17087d-b6e0-4f2c-85b6-ca20ba2fe561-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kkhmc\" (UID: \"6e17087d-b6e0-4f2c-85b6-ca20ba2fe561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217042 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd7nq\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-kube-api-access-xd7nq\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217068 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/088350fe-751e-41a8-8931-784e2a419e22-images\") pod \"machine-config-operator-74547568cd-lsz9b\" (UID: \"088350fe-751e-41a8-8931-784e2a419e22\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217085 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/89c4198d-192e-4533-9c95-27523486a3fa-node-bootstrap-token\") pod \"machine-config-server-55zl9\" (UID: \"89c4198d-192e-4533-9c95-27523486a3fa\") " pod="openshift-machine-config-operator/machine-config-server-55zl9"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217100 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e17087d-b6e0-4f2c-85b6-ca20ba2fe561-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kkhmc\" (UID: \"6e17087d-b6e0-4f2c-85b6-ca20ba2fe561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217126 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca47b2a4-9965-4fa6-8b80-1e21931f0860-serving-cert\") pod \"service-ca-operator-777779d784-77nlz\" (UID: \"ca47b2a4-9965-4fa6-8b80-1e21931f0860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217149 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxb95\" (UniqueName: \"kubernetes.io/projected/b8173c4a-4d8e-4a69-b60f-56807f886bbf-kube-api-access-mxb95\") pod \"dns-default-lmkjb\" (UID: \"b8173c4a-4d8e-4a69-b60f-56807f886bbf\") " pod="openshift-dns/dns-default-lmkjb"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217170 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdncd\" (UniqueName: \"kubernetes.io/projected/d8b72e72-ec67-4394-9d76-d7cfb15566ed-kube-api-access-hdncd\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217188 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n46v\" (UniqueName: \"kubernetes.io/projected/d34411f8-63ef-4455-a059-992ecf841688-kube-api-access-7n46v\") pod \"catalog-operator-68c6474976-szxq2\" (UID: \"d34411f8-63ef-4455-a059-992ecf841688\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217210 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82xxw\" (UniqueName: \"kubernetes.io/projected/b0a696d4-3301-4fd5-9d70-efa790fbce35-kube-api-access-82xxw\") pod \"collect-profiles-29521950-jtvxc\" (UID: \"b0a696d4-3301-4fd5-9d70-efa790fbce35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217234 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/75c49bc8-43a0-46ea-a9ed-ed22f124ea3c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lvbmn\" (UID: \"75c49bc8-43a0-46ea-a9ed-ed22f124ea3c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lvbmn"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217256 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9kkp\" (UniqueName: \"kubernetes.io/projected/ca47b2a4-9965-4fa6-8b80-1e21931f0860-kube-api-access-b9kkp\") pod \"service-ca-operator-777779d784-77nlz\" (UID: \"ca47b2a4-9965-4fa6-8b80-1e21931f0860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217275 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c3fac74-1086-4222-ad88-6c230d11c667-metrics-tls\") pod \"ingress-operator-5b745b69d9-7r2sv\" (UID: \"4c3fac74-1086-4222-ad88-6c230d11c667\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217296 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-service-ca-bundle\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217365 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-bound-sa-token\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217383 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv6dd\" (UniqueName: \"kubernetes.io/projected/89c4198d-192e-4533-9c95-27523486a3fa-kube-api-access-bv6dd\") pod \"machine-config-server-55zl9\" (UID: \"89c4198d-192e-4533-9c95-27523486a3fa\") " pod="openshift-machine-config-operator/machine-config-server-55zl9"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217403 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37375f80-f004-4621-b863-326c6e296435-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-whhpn\" (UID: \"37375f80-f004-4621-b863-326c6e296435\") " pod="openshift-marketplace/marketplace-operator-79b997595-whhpn"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217422 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-default-certificate\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217441 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c3fac74-1086-4222-ad88-6c230d11c667-trusted-ca\") pod \"ingress-operator-5b745b69d9-7r2sv\" (UID: \"4c3fac74-1086-4222-ad88-6c230d11c667\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217518 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-config\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217546 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c3fac74-1086-4222-ad88-6c230d11c667-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7r2sv\" (UID: \"4c3fac74-1086-4222-ad88-6c230d11c667\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217568 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff6zk\" (UniqueName: \"kubernetes.io/projected/37375f80-f004-4621-b863-326c6e296435-kube-api-access-ff6zk\") pod \"marketplace-operator-79b997595-whhpn\" (UID: \"37375f80-f004-4621-b863-326c6e296435\") " pod="openshift-marketplace/marketplace-operator-79b997595-whhpn"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217597 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8173c4a-4d8e-4a69-b60f-56807f886bbf-config-volume\") pod \"dns-default-lmkjb\" (UID: \"b8173c4a-4d8e-4a69-b60f-56807f886bbf\") " pod="openshift-dns/dns-default-lmkjb"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217624 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/088350fe-751e-41a8-8931-784e2a419e22-proxy-tls\") pod \"machine-config-operator-74547568cd-lsz9b\" (UID: \"088350fe-751e-41a8-8931-784e2a419e22\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217671 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwn5m\" (UniqueName: \"kubernetes.io/projected/4c3fac74-1086-4222-ad88-6c230d11c667-kube-api-access-bwn5m\") pod \"ingress-operator-5b745b69d9-7r2sv\" (UID: \"4c3fac74-1086-4222-ad88-6c230d11c667\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217705 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d34411f8-63ef-4455-a059-992ecf841688-srv-cert\") pod \"catalog-operator-68c6474976-szxq2\" (UID: \"d34411f8-63ef-4455-a059-992ecf841688\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217729 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdn98\" (UniqueName: \"kubernetes.io/projected/088350fe-751e-41a8-8931-784e2a419e22-kube-api-access-fdn98\") pod \"machine-config-operator-74547568cd-lsz9b\" (UID: \"088350fe-751e-41a8-8931-784e2a419e22\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217750 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2qqv\" (UniqueName: \"kubernetes.io/projected/6e17087d-b6e0-4f2c-85b6-ca20ba2fe561-kube-api-access-k2qqv\") pod \"kube-storage-version-migrator-operator-b67b599dd-kkhmc\" (UID: \"6e17087d-b6e0-4f2c-85b6-ca20ba2fe561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217778 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nx689\" (UID: \"4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217805 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b614c49-66c0-41c7-bec2-c657495f1a2c-cert\") pod \"ingress-canary-86t4c\" (UID: \"3b614c49-66c0-41c7-bec2-c657495f1a2c\") " pod="openshift-ingress-canary/ingress-canary-86t4c"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217826 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-serving-cert\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217851 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a507abb0-fee7-454a-bfb9-7d4e3e31bf56-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hgx9k\" (UID: \"a507abb0-fee7-454a-bfb9-7d4e3e31bf56\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217868 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs298\" (UniqueName: \"kubernetes.io/projected/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-kube-api-access-bs298\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217889 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/89c4198d-192e-4533-9c95-27523486a3fa-certs\") pod \"machine-config-server-55zl9\" (UID: \"89c4198d-192e-4533-9c95-27523486a3fa\") " pod="openshift-machine-config-operator/machine-config-server-55zl9"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217908 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41699b82-4fbd-4bc2-a45c-6971618962df-trusted-ca\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217928 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-socket-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217973 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.217998 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b23fc308-85e3-440a-b2d0-5895fe8b79a9-signing-key\") pod \"service-ca-9c57cc56f-lbvg5\" (UID: \"b23fc308-85e3-440a-b2d0-5895fe8b79a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218065 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/41699b82-4fbd-4bc2-a45c-6971618962df-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218096 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r6j2\" (UniqueName: \"kubernetes.io/projected/75c49bc8-43a0-46ea-a9ed-ed22f124ea3c-kube-api-access-4r6j2\") pod \"multus-admission-controller-857f4d67dd-lvbmn\" (UID: \"75c49bc8-43a0-46ea-a9ed-ed22f124ea3c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lvbmn"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218121 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0a696d4-3301-4fd5-9d70-efa790fbce35-config-volume\") pod \"collect-profiles-29521950-jtvxc\" (UID: \"b0a696d4-3301-4fd5-9d70-efa790fbce35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218163 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca47b2a4-9965-4fa6-8b80-1e21931f0860-config\") pod \"service-ca-operator-777779d784-77nlz\" (UID: \"ca47b2a4-9965-4fa6-8b80-1e21931f0860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218190 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-plugins-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218226 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-etcd-client\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218247 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/41699b82-4fbd-4bc2-a45c-6971618962df-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218270 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a507abb0-fee7-454a-bfb9-7d4e3e31bf56-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hgx9k\" (UID: \"a507abb0-fee7-454a-bfb9-7d4e3e31bf56\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218288 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0a696d4-3301-4fd5-9d70-efa790fbce35-secret-volume\") pod \"collect-profiles-29521950-jtvxc\" (UID: \"b0a696d4-3301-4fd5-9d70-efa790fbce35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218327 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-mountpoint-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218346 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl2gg\" (UniqueName: \"kubernetes.io/projected/4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f-kube-api-access-zl2gg\") pod \"package-server-manager-789f6589d5-nx689\" (UID: \"4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218365 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/41699b82-4fbd-4bc2-a45c-6971618962df-registry-certificates\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218393 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6bc\" (UniqueName: \"kubernetes.io/projected/3b614c49-66c0-41c7-bec2-c657495f1a2c-kube-api-access-rb6bc\") pod \"ingress-canary-86t4c\" (UID: \"3b614c49-66c0-41c7-bec2-c657495f1a2c\") " pod="openshift-ingress-canary/ingress-canary-86t4c"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218409 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-etcd-service-ca\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218424 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b23fc308-85e3-440a-b2d0-5895fe8b79a9-signing-cabundle\") pod \"service-ca-9c57cc56f-lbvg5\" (UID: \"b23fc308-85e3-440a-b2d0-5895fe8b79a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218440 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-registration-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218461 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8173c4a-4d8e-4a69-b60f-56807f886bbf-metrics-tls\") pod \"dns-default-lmkjb\" (UID: \"b8173c4a-4d8e-4a69-b60f-56807f886bbf\") " pod="openshift-dns/dns-default-lmkjb"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218487 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d34411f8-63ef-4455-a059-992ecf841688-profile-collector-cert\") pod \"catalog-operator-68c6474976-szxq2\" (UID: \"d34411f8-63ef-4455-a059-992ecf841688\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218508 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-registry-tls\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218523 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-etcd-ca\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218540 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a507abb0-fee7-454a-bfb9-7d4e3e31bf56-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hgx9k\" (UID: \"a507abb0-fee7-454a-bfb9-7d4e3e31bf56\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218567 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-metrics-certs\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218582 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37375f80-f004-4621-b863-326c6e296435-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-whhpn\" (UID: \"37375f80-f004-4621-b863-326c6e296435\") " pod="openshift-marketplace/marketplace-operator-79b997595-whhpn"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218610 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/088350fe-751e-41a8-8931-784e2a419e22-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lsz9b\" (UID: \"088350fe-751e-41a8-8931-784e2a419e22\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218636 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-csi-data-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218654 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fctw8\" (UniqueName: \"kubernetes.io/projected/b23fc308-85e3-440a-b2d0-5895fe8b79a9-kube-api-access-fctw8\") pod \"service-ca-9c57cc56f-lbvg5\" (UID: \"b23fc308-85e3-440a-b2d0-5895fe8b79a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218668 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-962d4\" (UniqueName: \"kubernetes.io/projected/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-kube-api-access-962d4\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.218685 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-stats-auth\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.221163 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.221403 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-stats-auth\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.221650 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-plugins-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.221966 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-config\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.222355 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e17087d-b6e0-4f2c-85b6-ca20ba2fe561-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kkhmc\" (UID: \"6e17087d-b6e0-4f2c-85b6-ca20ba2fe561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.223353 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/41699b82-4fbd-4bc2-a45c-6971618962df-registry-certificates\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.223637 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37375f80-f004-4621-b863-326c6e296435-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-whhpn\" (UID: \"37375f80-f004-4621-b863-326c6e296435\") " pod="openshift-marketplace/marketplace-operator-79b997595-whhpn"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.223752 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-service-ca-bundle\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.224613 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/088350fe-751e-41a8-8931-784e2a419e22-images\") pod \"machine-config-operator-74547568cd-lsz9b\" (UID: \"088350fe-751e-41a8-8931-784e2a419e22\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.224783 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/41699b82-4fbd-4bc2-a45c-6971618962df-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.224812 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca47b2a4-9965-4fa6-8b80-1e21931f0860-serving-cert\") pod \"service-ca-operator-777779d784-77nlz\" (UID: \"ca47b2a4-9965-4fa6-8b80-1e21931f0860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.224887 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-etcd-service-ca\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.224949 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-registration-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.225976 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b23fc308-85e3-440a-b2d0-5895fe8b79a9-signing-cabundle\") pod \"service-ca-9c57cc56f-lbvg5\" (UID: \"b23fc308-85e3-440a-b2d0-5895fe8b79a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.226446 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a507abb0-fee7-454a-bfb9-7d4e3e31bf56-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hgx9k\" (UID: \"a507abb0-fee7-454a-bfb9-7d4e3e31bf56\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.227176 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a507abb0-fee7-454a-bfb9-7d4e3e31bf56-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hgx9k\" (UID: \"a507abb0-fee7-454a-bfb9-7d4e3e31bf56\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.228093 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/088350fe-751e-41a8-8931-784e2a419e22-proxy-tls\") pod \"machine-config-operator-74547568cd-lsz9b\" (UID: \"088350fe-751e-41a8-8931-784e2a419e22\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.229834 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c3fac74-1086-4222-ad88-6c230d11c667-trusted-ca\") pod \"ingress-operator-5b745b69d9-7r2sv\" (UID: \"4c3fac74-1086-4222-ad88-6c230d11c667\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.230739 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41699b82-4fbd-4bc2-a45c-6971618962df-trusted-ca\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.230805 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-socket-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.245112 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62lng"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.245459 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-csi-data-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.246140 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-registry-tls\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.246368 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e17087d-b6e0-4f2c-85b6-ca20ba2fe561-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kkhmc\" (UID: \"6e17087d-b6e0-4f2c-85b6-ca20ba2fe561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.246638 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d8b72e72-ec67-4394-9d76-d7cfb15566ed-mountpoint-dir\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.247053 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-etcd-client\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.248557 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/89c4198d-192e-4533-9c95-27523486a3fa-certs\") pod \"machine-config-server-55zl9\" (UID: \"89c4198d-192e-4533-9c95-27523486a3fa\") " pod="openshift-machine-config-operator/machine-config-server-55zl9"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.248758 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0a696d4-3301-4fd5-9d70-efa790fbce35-secret-volume\") pod \"collect-profiles-29521950-jtvxc\" (UID: \"b0a696d4-3301-4fd5-9d70-efa790fbce35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc"
Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.255212 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:14.755181095 +0000 UTC m=+142.415942318 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.257193 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d34411f8-63ef-4455-a059-992ecf841688-profile-collector-cert\") pod \"catalog-operator-68c6474976-szxq2\" (UID: \"d34411f8-63ef-4455-a059-992ecf841688\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.257224 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0a696d4-3301-4fd5-9d70-efa790fbce35-config-volume\") pod \"collect-profiles-29521950-jtvxc\" (UID: \"b0a696d4-3301-4fd5-9d70-efa790fbce35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.257248 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d34411f8-63ef-4455-a059-992ecf841688-srv-cert\") pod \"catalog-operator-68c6474976-szxq2\" (UID: \"d34411f8-63ef-4455-a059-992ecf841688\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2"
Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.257955 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca47b2a4-9965-4fa6-8b80-1e21931f0860-config\") pod \"service-ca-operator-777779d784-77nlz\" (UID: 
\"ca47b2a4-9965-4fa6-8b80-1e21931f0860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.258392 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b23fc308-85e3-440a-b2d0-5895fe8b79a9-signing-key\") pod \"service-ca-9c57cc56f-lbvg5\" (UID: \"b23fc308-85e3-440a-b2d0-5895fe8b79a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.259082 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37375f80-f004-4621-b863-326c6e296435-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-whhpn\" (UID: \"37375f80-f004-4621-b863-326c6e296435\") " pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.259693 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-metrics-certs\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.259758 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-default-certificate\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.260212 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nx689\" (UID: \"4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.263423 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/088350fe-751e-41a8-8931-784e2a419e22-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lsz9b\" (UID: \"088350fe-751e-41a8-8931-784e2a419e22\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.264014 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-etcd-ca\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.264383 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/75c49bc8-43a0-46ea-a9ed-ed22f124ea3c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lvbmn\" (UID: \"75c49bc8-43a0-46ea-a9ed-ed22f124ea3c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lvbmn" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.267205 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/41699b82-4fbd-4bc2-a45c-6971618962df-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.268888 
4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-serving-cert\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.269212 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.272421 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/89c4198d-192e-4533-9c95-27523486a3fa-node-bootstrap-token\") pod \"machine-config-server-55zl9\" (UID: \"89c4198d-192e-4533-9c95-27523486a3fa\") " pod="openshift-machine-config-operator/machine-config-server-55zl9" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.290511 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd7nq\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-kube-api-access-xd7nq\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.290633 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b614c49-66c0-41c7-bec2-c657495f1a2c-cert\") pod \"ingress-canary-86t4c\" (UID: \"3b614c49-66c0-41c7-bec2-c657495f1a2c\") " pod="openshift-ingress-canary/ingress-canary-86t4c" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.291581 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c3fac74-1086-4222-ad88-6c230d11c667-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-7r2sv\" (UID: \"4c3fac74-1086-4222-ad88-6c230d11c667\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.296665 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c3fac74-1086-4222-ad88-6c230d11c667-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7r2sv\" (UID: \"4c3fac74-1086-4222-ad88-6c230d11c667\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.314353 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff6zk\" (UniqueName: \"kubernetes.io/projected/37375f80-f004-4621-b863-326c6e296435-kube-api-access-ff6zk\") pod \"marketplace-operator-79b997595-whhpn\" (UID: \"37375f80-f004-4621-b863-326c6e296435\") " pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.320901 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.321052 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:14.821029033 +0000 UTC m=+142.481790246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.321155 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8173c4a-4d8e-4a69-b60f-56807f886bbf-metrics-tls\") pod \"dns-default-lmkjb\" (UID: \"b8173c4a-4d8e-4a69-b60f-56807f886bbf\") " pod="openshift-dns/dns-default-lmkjb" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.321253 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxb95\" (UniqueName: \"kubernetes.io/projected/b8173c4a-4d8e-4a69-b60f-56807f886bbf-kube-api-access-mxb95\") pod \"dns-default-lmkjb\" (UID: \"b8173c4a-4d8e-4a69-b60f-56807f886bbf\") " pod="openshift-dns/dns-default-lmkjb" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.321370 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8173c4a-4d8e-4a69-b60f-56807f886bbf-config-volume\") pod \"dns-default-lmkjb\" (UID: \"b8173c4a-4d8e-4a69-b60f-56807f886bbf\") " pod="openshift-dns/dns-default-lmkjb" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.321898 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.322191 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:14.822184005 +0000 UTC m=+142.482945228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.322652 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8173c4a-4d8e-4a69-b60f-56807f886bbf-config-volume\") pod \"dns-default-lmkjb\" (UID: \"b8173c4a-4d8e-4a69-b60f-56807f886bbf\") " pod="openshift-dns/dns-default-lmkjb" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.338076 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl2gg\" (UniqueName: \"kubernetes.io/projected/4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f-kube-api-access-zl2gg\") pod \"package-server-manager-789f6589d5-nx689\" (UID: \"4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.338996 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8173c4a-4d8e-4a69-b60f-56807f886bbf-metrics-tls\") pod \"dns-default-lmkjb\" (UID: \"b8173c4a-4d8e-4a69-b60f-56807f886bbf\") 
" pod="openshift-dns/dns-default-lmkjb" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.342563 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.349061 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82xxw\" (UniqueName: \"kubernetes.io/projected/b0a696d4-3301-4fd5-9d70-efa790fbce35-kube-api-access-82xxw\") pod \"collect-profiles-29521950-jtvxc\" (UID: \"b0a696d4-3301-4fd5-9d70-efa790fbce35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.359811 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.364208 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x222k"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.381462 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.384942 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdncd\" (UniqueName: \"kubernetes.io/projected/d8b72e72-ec67-4394-9d76-d7cfb15566ed-kube-api-access-hdncd\") pod \"csi-hostpathplugin-pqvf4\" (UID: \"d8b72e72-ec67-4394-9d76-d7cfb15566ed\") " pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.396491 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n46v\" (UniqueName: \"kubernetes.io/projected/d34411f8-63ef-4455-a059-992ecf841688-kube-api-access-7n46v\") pod \"catalog-operator-68c6474976-szxq2\" (UID: \"d34411f8-63ef-4455-a059-992ecf841688\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.397015 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.399213 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mg6kk"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.405096 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-75569"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.407214 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.424385 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.424629 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:14.924602917 +0000 UTC m=+142.585364140 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.424687 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.424850 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.425145 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:14.925136622 +0000 UTC m=+142.585897845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.427531 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb6bc\" (UniqueName: \"kubernetes.io/projected/3b614c49-66c0-41c7-bec2-c657495f1a2c-kube-api-access-rb6bc\") pod \"ingress-canary-86t4c\" (UID: \"3b614c49-66c0-41c7-bec2-c657495f1a2c\") " pod="openshift-ingress-canary/ingress-canary-86t4c" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.428696 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-29bxl"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.432791 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-bound-sa-token\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.449503 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv6dd\" (UniqueName: \"kubernetes.io/projected/89c4198d-192e-4533-9c95-27523486a3fa-kube-api-access-bv6dd\") pod \"machine-config-server-55zl9\" (UID: \"89c4198d-192e-4533-9c95-27523486a3fa\") " pod="openshift-machine-config-operator/machine-config-server-55zl9" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.480144 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k2qqv\" (UniqueName: \"kubernetes.io/projected/6e17087d-b6e0-4f2c-85b6-ca20ba2fe561-kube-api-access-k2qqv\") pod \"kube-storage-version-migrator-operator-b67b599dd-kkhmc\" (UID: \"6e17087d-b6e0-4f2c-85b6-ca20ba2fe561\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.490924 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwn5m\" (UniqueName: \"kubernetes.io/projected/4c3fac74-1086-4222-ad88-6c230d11c667-kube-api-access-bwn5m\") pod \"ingress-operator-5b745b69d9-7r2sv\" (UID: \"4c3fac74-1086-4222-ad88-6c230d11c667\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.513476 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fctw8\" (UniqueName: \"kubernetes.io/projected/b23fc308-85e3-440a-b2d0-5895fe8b79a9-kube-api-access-fctw8\") pod \"service-ca-9c57cc56f-lbvg5\" (UID: \"b23fc308-85e3-440a-b2d0-5895fe8b79a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.526630 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.527134 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.027117752 +0000 UTC m=+142.687878975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:14 crc kubenswrapper[4813]: W0217 08:43:14.539583 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38bc9742_49be_4bb0_924e_2ce0db82ec2e.slice/crio-7ed7a2a44f56e67b9fb3e220b6050942b865f428e5cb60a631d4c2ba4d2afea0 WatchSource:0}: Error finding container 7ed7a2a44f56e67b9fb3e220b6050942b865f428e5cb60a631d4c2ba4d2afea0: Status 404 returned error can't find the container with id 7ed7a2a44f56e67b9fb3e220b6050942b865f428e5cb60a631d4c2ba4d2afea0 Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.539989 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-962d4\" (UniqueName: \"kubernetes.io/projected/1d3c0419-3831-4d6b-ada5-cc6a73f8a176-kube-api-access-962d4\") pod \"router-default-5444994796-z2wgd\" (UID: \"1d3c0419-3831-4d6b-ada5-cc6a73f8a176\") " pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.555569 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.575746 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9kkp\" (UniqueName: \"kubernetes.io/projected/ca47b2a4-9965-4fa6-8b80-1e21931f0860-kube-api-access-b9kkp\") pod \"service-ca-operator-777779d784-77nlz\" (UID: \"ca47b2a4-9965-4fa6-8b80-1e21931f0860\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.575792 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdn98\" (UniqueName: \"kubernetes.io/projected/088350fe-751e-41a8-8931-784e2a419e22-kube-api-access-fdn98\") pod \"machine-config-operator-74547568cd-lsz9b\" (UID: \"088350fe-751e-41a8-8931-784e2a419e22\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.596121 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r6j2\" (UniqueName: \"kubernetes.io/projected/75c49bc8-43a0-46ea-a9ed-ed22f124ea3c-kube-api-access-4r6j2\") pod \"multus-admission-controller-857f4d67dd-lvbmn\" (UID: \"75c49bc8-43a0-46ea-a9ed-ed22f124ea3c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lvbmn" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.613775 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a507abb0-fee7-454a-bfb9-7d4e3e31bf56-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hgx9k\" (UID: \"a507abb0-fee7-454a-bfb9-7d4e3e31bf56\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.619635 4813 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.622441 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.628355 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.628681 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.12865897 +0000 UTC m=+142.789420193 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.630453 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.631788 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs298\" (UniqueName: \"kubernetes.io/projected/161ea8a7-bfe7-4a23-b625-21f6f38e5b37-kube-api-access-bs298\") pod \"etcd-operator-b45778765-dqskm\" (UID: \"161ea8a7-bfe7-4a23-b625-21f6f38e5b37\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.647137 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lvbmn" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.668204 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-55zl9" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.670908 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxb95\" (UniqueName: \"kubernetes.io/projected/b8173c4a-4d8e-4a69-b60f-56807f886bbf-kube-api-access-mxb95\") pod \"dns-default-lmkjb\" (UID: \"b8173c4a-4d8e-4a69-b60f-56807f886bbf\") " pod="openshift-dns/dns-default-lmkjb" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.671112 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-cf6d7"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.676072 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.685646 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.700214 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-86t4c" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.729720 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.730027 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.229998903 +0000 UTC m=+142.890760136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.730257 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.731070 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.231046282 +0000 UTC m=+142.891807525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.731474 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lmkjb" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.825277 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-7rbpj"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.831092 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.831217 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.331198512 +0000 UTC m=+142.991959735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.831404 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.831651 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.331643764 +0000 UTC m=+142.992404987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.839589 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.842041 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.846103 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.857820 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l"] Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.863369 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k" Feb 17 08:43:14 crc kubenswrapper[4813]: W0217 08:43:14.893830 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3c0419_3831_4d6b_ada5_cc6a73f8a176.slice/crio-415699adf3822b169cc7b05b3210468f0587c2d44688e8d58580e981333816ff WatchSource:0}: Error finding container 415699adf3822b169cc7b05b3210468f0587c2d44688e8d58580e981333816ff: Status 404 returned error can't find the container with id 415699adf3822b169cc7b05b3210468f0587c2d44688e8d58580e981333816ff Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.933602 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:14 crc kubenswrapper[4813]: E0217 08:43:14.934175 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.434155889 +0000 UTC m=+143.094917112 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:14 crc kubenswrapper[4813]: W0217 08:43:14.938611 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcabcb2d_1368_4303_b9e8_f7fd269ce1ca.slice/crio-96423f69f35bc5f9b15289c251c130d99203ac52bd2d74d693ea9fc416d9ee8f WatchSource:0}: Error finding container 96423f69f35bc5f9b15289c251c130d99203ac52bd2d74d693ea9fc416d9ee8f: Status 404 returned error can't find the container with id 96423f69f35bc5f9b15289c251c130d99203ac52bd2d74d693ea9fc416d9ee8f Feb 17 08:43:14 crc kubenswrapper[4813]: W0217 08:43:14.954263 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69c6afc8_1fa5_44e1_8ba7_bcf6cd506952.slice/crio-4f6e75f1f15cb7048d2e52f15d9fcaedc1a1cf5e8eaa5aae98280f993a360f85 WatchSource:0}: Error finding container 4f6e75f1f15cb7048d2e52f15d9fcaedc1a1cf5e8eaa5aae98280f993a360f85: Status 404 returned error can't find the container with id 4f6e75f1f15cb7048d2e52f15d9fcaedc1a1cf5e8eaa5aae98280f993a360f85 Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.988842 4813 generic.go:334] "Generic (PLEG): container finished" podID="1bd1ba13-d69b-400b-8337-ad2938a9452d" 
containerID="3a4501c4362d4a8306c6ef02b47270f9802099e43c3eac0a6eaf78e21b5e9210" exitCode=0 Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.988980 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" event={"ID":"1bd1ba13-d69b-400b-8337-ad2938a9452d","Type":"ContainerDied","Data":"3a4501c4362d4a8306c6ef02b47270f9802099e43c3eac0a6eaf78e21b5e9210"} Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.989060 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" event={"ID":"1bd1ba13-d69b-400b-8337-ad2938a9452d","Type":"ContainerStarted","Data":"3b1651726c6301616e9b80e989e42bd3786d202fe5e1d1840d4bd1d15ca05393"} Feb 17 08:43:14 crc kubenswrapper[4813]: I0217 08:43:14.996554 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" event={"ID":"38bc9742-49be-4bb0-924e-2ce0db82ec2e","Type":"ContainerStarted","Data":"7ed7a2a44f56e67b9fb3e220b6050942b865f428e5cb60a631d4c2ba4d2afea0"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.026939 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" event={"ID":"e48a8873-082c-40b8-8dca-60190e4772b2","Type":"ContainerStarted","Data":"96030cc8ab10b7ea092d2bb690df974ea1358bf9ffd99bb2094d19be73c0b4a1"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.034902 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.036419 4813 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.536396026 +0000 UTC m=+143.197157249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.037784 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" event={"ID":"bf4b8a5c-06c8-4206-852c-3e58e2e35bca","Type":"ContainerStarted","Data":"2cbcd01b5c4095629402e9aa8e53ae0541b61662aafd29554b673c3cfa8e7f32"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.038052 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" event={"ID":"bf4b8a5c-06c8-4206-852c-3e58e2e35bca","Type":"ContainerStarted","Data":"7ce925c1444d506fdb17ee0e2bd145009b80eb31bf80350bfc8c21edda020a91"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.038532 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" event={"ID":"bf4b8a5c-06c8-4206-852c-3e58e2e35bca","Type":"ContainerStarted","Data":"6758d9f59271d22084cb1c9932268766b96ffc3b8ef2bf823b89a06712d4d958"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.047867 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" 
event={"ID":"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2","Type":"ContainerStarted","Data":"534d4bb8287cb5216564afabece691d693b9f0df6771a7c99daeb61827727b12"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.047915 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" event={"ID":"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2","Type":"ContainerStarted","Data":"ce7eb9d41cfea79f398de592decb8b44b973f6f1bb312d58172bcf347193c18a"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.049725 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.052532 4813 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-42mx8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.052577 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" podUID="d11448d0-33b5-4b7e-ab63-bf30a82b0cb2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.058101 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh" event={"ID":"712b5668-146d-4a1f-a86d-fb65b56697b7","Type":"ContainerStarted","Data":"9e09313e92cc173f19a4768b126b8bd6268c0afbc5aed747ffd0c840e9a1015e"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.058148 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh" event={"ID":"712b5668-146d-4a1f-a86d-fb65b56697b7","Type":"ContainerStarted","Data":"58f7174401a221f3e4ea92442209a76ff51a0c4e56d686c34434296b77fb125b"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.075756 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" event={"ID":"9f185fcc-0363-430b-a331-0e8ea791f9f6","Type":"ContainerStarted","Data":"266616d216b25dee2694d0a3a2c909b1b11d7d10936e90156fa3d2227e26fe50"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.076456 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" event={"ID":"9f185fcc-0363-430b-a331-0e8ea791f9f6","Type":"ContainerStarted","Data":"f1e91e25c895a73029f1085755deb5d9e7366ec10c742c425a1b07b9ba7b6139"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.076480 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.080497 4813 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fkwkx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.080573 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" podUID="9f185fcc-0363-430b-a331-0e8ea791f9f6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.087397 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9" event={"ID":"9a5230ee-cb95-4c2b-984a-3ed9286f4c45","Type":"ContainerStarted","Data":"591102de7db31c298a3a2e099515b1cbdb454f695eb90f2c7ef8bcfa0074dd7a"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.087434 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9" event={"ID":"9a5230ee-cb95-4c2b-984a-3ed9286f4c45","Type":"ContainerStarted","Data":"2a7bd2d101c806235abaa73baff29e174ab125cdac3426b8e6a3ec2c52f90944"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.087444 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9" event={"ID":"9a5230ee-cb95-4c2b-984a-3ed9286f4c45","Type":"ContainerStarted","Data":"810dbc11e08549fde032f28f92eff79b0b84514752b9f699d46976ab6d35a50c"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.140096 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.140263 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.640235708 +0000 UTC m=+143.300996931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.141423 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.142852 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.642835859 +0000 UTC m=+143.303597082 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.170378 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" event={"ID":"17c04723-efa9-4b51-9213-3a22b548b114","Type":"ContainerStarted","Data":"cfd5bf163d979f95e9ea3d28827f194a25517049afcb703ea158a26024a39d60"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.170422 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" event={"ID":"17c04723-efa9-4b51-9213-3a22b548b114","Type":"ContainerStarted","Data":"76f65d004af159480cddd85f8f70ceab87f92ec0ec52ab00d200ea45cedf0dae"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.170434 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" event={"ID":"967e0fab-3a56-4541-b3aa-e626e2c524c1","Type":"ContainerStarted","Data":"61073d7901b4f8e651e88eb6b9ef14446bf2f195c4326a4c439ae5567c041e40"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.170448 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" event={"ID":"967e0fab-3a56-4541-b3aa-e626e2c524c1","Type":"ContainerStarted","Data":"a28aad9a925e879ac7d54957439e80ce3418fb1c1556ae732f48b0784d5a523e"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.170457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-z2wgd" event={"ID":"1d3c0419-3831-4d6b-ada5-cc6a73f8a176","Type":"ContainerStarted","Data":"415699adf3822b169cc7b05b3210468f0587c2d44688e8d58580e981333816ff"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.170466 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-cf6d7" event={"ID":"3147530a-17b5-4388-85a2-4644ff82a31a","Type":"ContainerStarted","Data":"7c2ba968dd4e69b2bfef59bcb08903cf19c8bb678152cb9d87b9c0e0a2cf0dbe"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.170476 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-55zl9" event={"ID":"89c4198d-192e-4533-9c95-27523486a3fa","Type":"ContainerStarted","Data":"0c20e6357932962cbc0b525134db18e0a96286101295c0151420f0d26d980195"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.170486 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mg6kk" event={"ID":"5bee7657-82f7-4887-82e8-6029f64d52f5","Type":"ContainerStarted","Data":"969e905d604ae616933c52a87043ef41c49c3f6fb6cc2f379e7cf239daf6dfd5"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.170496 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" event={"ID":"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f","Type":"ContainerStarted","Data":"02fc91d89346497e7d8d5b1c6cfa7d33ab48103418d017484f4d71b6fb8f8e97"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.170505 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x222k" event={"ID":"eb317bb8-7cfa-4865-8390-c7be4460c44b","Type":"ContainerStarted","Data":"0dd499d7e90bcfb5a4c5cfa37eaeff1aaa48a04e4bfed493240d59fdd5854da7"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.170516 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" event={"ID":"690deab5-f2ed-4fa4-8191-7bd7a625b924","Type":"ContainerStarted","Data":"f39575fd6fc9bd44a1aa33f70dda42c632278ee510049edc17c647f9d86f8b76"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.170525 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" event={"ID":"690deab5-f2ed-4fa4-8191-7bd7a625b924","Type":"ContainerStarted","Data":"1decf93d178f5788742c6c8c33f4b4ae4f844aaae39af18491bbd85031c66ded"} Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.242383 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.242514 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.742479115 +0000 UTC m=+143.403240338 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.242806 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.244839 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.74482611 +0000 UTC m=+143.405587383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.354945 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.355049 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.855031946 +0000 UTC m=+143.515793169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.355403 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.355688 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.855680694 +0000 UTC m=+143.516441917 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.457707 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.458210 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:15.958195239 +0000 UTC m=+143.618956462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.561004 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.561412 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:16.061400163 +0000 UTC m=+143.722161386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.661735 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.662134 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:16.162120638 +0000 UTC m=+143.822881861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.694939 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.694992 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.695002 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-62lng"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.734326 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.743469 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-l2l9m"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.766228 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.767746 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:16.267730018 +0000 UTC m=+143.928491241 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:15 crc kubenswrapper[4813]: W0217 08:43:15.832761 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78c22549_ebe5_4e3a_8805_d180911a3c94.slice/crio-8b2bffdc6d894d15cb872be3762a9fcb131186489002aa733bfe65981328abb3 WatchSource:0}: Error finding container 8b2bffdc6d894d15cb872be3762a9fcb131186489002aa733bfe65981328abb3: Status 404 returned error can't find the container with id 8b2bffdc6d894d15cb872be3762a9fcb131186489002aa733bfe65981328abb3
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.835298 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.835641 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pqvf4"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.846587 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.850990 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lvbmn"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.854320 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-77nlz"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.858571 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.859823 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dqskm"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.875740 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.876061 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:16.376048583 +0000 UTC m=+144.036809806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:15 crc kubenswrapper[4813]: W0217 08:43:15.920902 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a6fdaa1_fc85_4de1_b32e_3a9c98834e1f.slice/crio-ecf6b55c8efce61aee6b4abc09bc07c9982162238e1b82008a4c23d199f1b6fd WatchSource:0}: Error finding container ecf6b55c8efce61aee6b4abc09bc07c9982162238e1b82008a4c23d199f1b6fd: Status 404 returned error can't find the container with id ecf6b55c8efce61aee6b4abc09bc07c9982162238e1b82008a4c23d199f1b6fd
Feb 17 08:43:15 crc kubenswrapper[4813]: W0217 08:43:15.948819 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75c49bc8_43a0_46ea_a9ed_ed22f124ea3c.slice/crio-fb0c51ccda9072bd85fa9180df55d4ee73856642c81b4f21811f5e3039b4becc WatchSource:0}: Error finding container fb0c51ccda9072bd85fa9180df55d4ee73856642c81b4f21811f5e3039b4becc: Status 404 returned error can't find the container with id fb0c51ccda9072bd85fa9180df55d4ee73856642c81b4f21811f5e3039b4becc
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.970902 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lbvg5"]
Feb 17 08:43:15 crc kubenswrapper[4813]: I0217 08:43:15.977047 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:15 crc kubenswrapper[4813]: E0217 08:43:15.977364 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:16.477352884 +0000 UTC m=+144.138114107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:15.999195 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-whhpn"]
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:15.999230 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc"]
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.003356 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6"]
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.013620 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b"]
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.015157 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2"]
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.017074 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lmkjb"]
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.031637 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-86t4c"]
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.058195 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k"]
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.079731 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:16 crc kubenswrapper[4813]: E0217 08:43:16.080296 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:16.580276901 +0000 UTC m=+144.241038124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.154577 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7rbpj" event={"ID":"dcabcb2d-1368-4303-b9e8-f7fd269ce1ca","Type":"ContainerStarted","Data":"1ef053bbc8af5d767f96bb388551129ffd3b9d05c977626aeaf5f470305d27d3"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.154624 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-7rbpj" event={"ID":"dcabcb2d-1368-4303-b9e8-f7fd269ce1ca","Type":"ContainerStarted","Data":"96423f69f35bc5f9b15289c251c130d99203ac52bd2d74d693ea9fc416d9ee8f"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.157975 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-7rbpj"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.161467 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-7rbpj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.161539 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7rbpj" podUID="dcabcb2d-1368-4303-b9e8-f7fd269ce1ca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.163817 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l2l9m" event={"ID":"70da8a3c-ff49-4f82-a68b-d955c2cceb2b","Type":"ContainerStarted","Data":"9aa14230565d0c1d1ef730a56409858648d46760d5859e78ba6fb6f7f8caa9d9"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.166111 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mg6kk" event={"ID":"5bee7657-82f7-4887-82e8-6029f64d52f5","Type":"ContainerStarted","Data":"1a6a24a54c255eea5a223f1f12549436d9474fc12a127ae1aaa841804c3a2339"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.166180 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mg6kk" event={"ID":"5bee7657-82f7-4887-82e8-6029f64d52f5","Type":"ContainerStarted","Data":"0ae670efa5d0d3a9a08dbe6247fcead6e547e05a6a9d95a80551a734387f25e0"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.181698 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:16 crc kubenswrapper[4813]: E0217 08:43:16.181977 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:16.681966753 +0000 UTC m=+144.342727976 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:16 crc kubenswrapper[4813]: W0217 08:43:16.183382 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b614c49_66c0_41c7_bec2_c657495f1a2c.slice/crio-de9d207481734754e5f1434784eb3ed07a15370020c372e38f458929db08c6e1 WatchSource:0}: Error finding container de9d207481734754e5f1434784eb3ed07a15370020c372e38f458929db08c6e1: Status 404 returned error can't find the container with id de9d207481734754e5f1434784eb3ed07a15370020c372e38f458929db08c6e1
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.183680 4813 generic.go:334] "Generic (PLEG): container finished" podID="38bc9742-49be-4bb0-924e-2ce0db82ec2e" containerID="5f2ab361fd19eac6cde0142021c625f155308a372da9503f330bac104d26bcf9" exitCode=0
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.184243 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" event={"ID":"38bc9742-49be-4bb0-924e-2ce0db82ec2e","Type":"ContainerDied","Data":"5f2ab361fd19eac6cde0142021c625f155308a372da9503f330bac104d26bcf9"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.203264 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" event={"ID":"d8b72e72-ec67-4394-9d76-d7cfb15566ed","Type":"ContainerStarted","Data":"fdb88477b954d1a18cda66bfe8a62ab872be43b844c8d1ed3a51e71fc6d4d39b"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.212405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5" event={"ID":"b23fc308-85e3-440a-b2d0-5895fe8b79a9","Type":"ContainerStarted","Data":"caf4e48616600aa0b0a8bfc628b1d7cfa673f1aa86847062c56bea5d37edfc0e"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.214269 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" event={"ID":"91672759-be09-404a-b4f4-ccbb995f9209","Type":"ContainerStarted","Data":"9259f1358869369a8023127c8006cf358f0dc9a76abc6f6071fbbdfb5dc507ea"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.214291 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" event={"ID":"91672759-be09-404a-b4f4-ccbb995f9209","Type":"ContainerStarted","Data":"3c3dd733948b892606d6256784387b76cb9766b8add78ac5a62d6e9c9db22354"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.230787 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b" event={"ID":"088350fe-751e-41a8-8931-784e2a419e22","Type":"ContainerStarted","Data":"77ae311ec5e5197b9b042163ff67202e1b87fa02af230958d704afb9e5ffb4eb"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.241500 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-z2wgd" event={"ID":"1d3c0419-3831-4d6b-ada5-cc6a73f8a176","Type":"ContainerStarted","Data":"7cad15739916410e66f8231f58261f873818435f65c686a4ec483bad8d186469"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.268756 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-cf6d7" event={"ID":"3147530a-17b5-4388-85a2-4644ff82a31a","Type":"ContainerStarted","Data":"b8e89dd8f0c4e76e5b9e12970e49ae94cf5dc41b94ca04e1bcd942f427bf920d"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.270298 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-cf6d7"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.273655 4813 patch_prober.go:28] interesting pod/console-operator-58897d9998-cf6d7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.273707 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-cf6d7" podUID="3147530a-17b5-4388-85a2-4644ff82a31a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.275890 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" event={"ID":"4c3fac74-1086-4222-ad88-6c230d11c667","Type":"ContainerStarted","Data":"1e30367282411ce7541f7418cbeb55666784928124487384cabedd7938664d81"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.283631 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:16 crc kubenswrapper[4813]: E0217 08:43:16.284413 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:16.784372155 +0000 UTC m=+144.445133368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.291457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" event={"ID":"161ea8a7-bfe7-4a23-b625-21f6f38e5b37","Type":"ContainerStarted","Data":"6cf4c0abae67fef8fa3d7b7a10865ac1d6e89c22de890fe846ec36244a478cd9"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.296961 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" event={"ID":"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f","Type":"ContainerStarted","Data":"8b3de59e92b5b22e3b789566eff99c9775a3238906ef5388c289b16b40f7d712"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.299349 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.302670 4813 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-29bxl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body=
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.303951 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" podUID="bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.304718 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-qxxm8" podStartSLOduration=123.304709163 podStartE2EDuration="2m3.304709163s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:16.304357564 +0000 UTC m=+143.965118787" watchObservedRunningTime="2026-02-17 08:43:16.304709163 +0000 UTC m=+143.965470386"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.319610 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" event={"ID":"627203b8-948f-4881-8398-ecd21cc274f6","Type":"ContainerStarted","Data":"cf6b7578208f393950f0f811011fea3277bd8b6bbd96cb7ad0575f4f085f027e"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.324105 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" event={"ID":"b0a696d4-3301-4fd5-9d70-efa790fbce35","Type":"ContainerStarted","Data":"19b316341b1e2857aa23b414199774487b82f365cedda4590119f891a76d21df"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.338415 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" podStartSLOduration=123.338400428 podStartE2EDuration="2m3.338400428s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:16.337817882 +0000 UTC m=+143.998579105" watchObservedRunningTime="2026-02-17 08:43:16.338400428 +0000 UTC m=+143.999161651"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.338811 4813 generic.go:334] "Generic (PLEG): container finished" podID="eb317bb8-7cfa-4865-8390-c7be4460c44b" containerID="7ac859f5acf7f428595f3623c69839f688a87ae41f954e873a1276f1ec69be8d" exitCode=0
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.338976 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x222k" event={"ID":"eb317bb8-7cfa-4865-8390-c7be4460c44b","Type":"ContainerDied","Data":"7ac859f5acf7f428595f3623c69839f688a87ae41f954e873a1276f1ec69be8d"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.367936 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-55zl9" event={"ID":"89c4198d-192e-4533-9c95-27523486a3fa","Type":"ContainerStarted","Data":"f26154b0ec4c89d555e9dd6c7906149fbc967112a0da1564d02b87545c1f42ba"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.372498 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lmkjb" event={"ID":"b8173c4a-4d8e-4a69-b60f-56807f886bbf","Type":"ContainerStarted","Data":"5a65a07538777e5f7660dc464b8ca5b1b74dd32ac9eb22a090b4090cd864d5cb"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.384415 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" event={"ID":"37375f80-f004-4621-b863-326c6e296435","Type":"ContainerStarted","Data":"14e676424034ce3e41ee7477e9b93708861f62b2f0a52b6953d071dc7c6acf50"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.384945 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:16 crc kubenswrapper[4813]: E0217 08:43:16.386251 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:16.886240162 +0000 UTC m=+144.547001385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.391530 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9kvp9" podStartSLOduration=123.391520937 podStartE2EDuration="2m3.391520937s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:16.384615097 +0000 UTC m=+144.045376320" watchObservedRunningTime="2026-02-17 08:43:16.391520937 +0000 UTC m=+144.052282160"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.426686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lvbmn" event={"ID":"75c49bc8-43a0-46ea-a9ed-ed22f124ea3c","Type":"ContainerStarted","Data":"fb0c51ccda9072bd85fa9180df55d4ee73856642c81b4f21811f5e3039b4becc"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.427979 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" event={"ID":"78c22549-ebe5-4e3a-8805-d180911a3c94","Type":"ContainerStarted","Data":"8b2bffdc6d894d15cb872be3762a9fcb131186489002aa733bfe65981328abb3"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.439212 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" event={"ID":"edb8ea95-7464-414b-8208-03a3a8426d74","Type":"ContainerStarted","Data":"943199d59ab25d85b6c4519c17ec410148441318344507bd5abf83a85b1cf4af"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.440050 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.448490 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz" event={"ID":"ca47b2a4-9965-4fa6-8b80-1e21931f0860","Type":"ContainerStarted","Data":"b1ee34750215743b469a49f45a217c4f6120a988be7a472468fd6294739bc607"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.451134 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-8r6b6" event={"ID":"e48a8873-082c-40b8-8dca-60190e4772b2","Type":"ContainerStarted","Data":"05492546ccda790a3c1c8969e0b82edeb379092d7f50ca3bd35ff9323d0ba3e2"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.454233 4813 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7bmzh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body=
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.454272 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" podUID="edb8ea95-7464-414b-8208-03a3a8426d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.459398 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" event={"ID":"1bd1ba13-d69b-400b-8337-ad2938a9452d","Type":"ContainerStarted","Data":"f106ef956990fbff4fedd91b6f67ed841cdcd73483946a2a609172aef00b63aa"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.485938 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:16 crc kubenswrapper[4813]: E0217 08:43:16.486245 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:16.986213217 +0000 UTC m=+144.646974440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.495006 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n" event={"ID":"e5f29fce-8bc9-48cb-b808-f55cb2e25c31","Type":"ContainerStarted","Data":"c5fe73d4766423968269b0b9d89bce9cb5eaeaeb25e8921c54edcd07aad47b0e"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.495058 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n" event={"ID":"e5f29fce-8bc9-48cb-b808-f55cb2e25c31","Type":"ContainerStarted","Data":"ad967fbee0048b90ed30332e272747bb7aee1a48213e8909ae92ee846e877c61"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.509150 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689" event={"ID":"4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f","Type":"ContainerStarted","Data":"ecf6b55c8efce61aee6b4abc09bc07c9982162238e1b82008a4c23d199f1b6fd"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.526621 4813 csr.go:261] certificate signing request csr-682hn is approved, waiting to be issued
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.534600 4813 csr.go:257] certificate signing request csr-682hn is issued
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.535600 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" event={"ID":"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952","Type":"ContainerStarted","Data":"60d7cc3f5d46fc7c556d17c8f8d58cfca70a54904bbc5a6d5c4be68dd62e8c46"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.535741 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" event={"ID":"69c6afc8-1fa5-44e1-8ba7-bcf6cd506952","Type":"ContainerStarted","Data":"4f6e75f1f15cb7048d2e52f15d9fcaedc1a1cf5e8eaa5aae98280f993a360f85"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.561067 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2" event={"ID":"d34411f8-63ef-4455-a059-992ecf841688","Type":"ContainerStarted","Data":"14a85c4788a003a2fe4c1b1c53af25d004ad46748535bea064947d33aeef25b6"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.571038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62lng" event={"ID":"11abe3c6-6bcb-4453-ba5d-71329e039ccc","Type":"ContainerStarted","Data":"75e072e89550616a83d42c23ac1110e29ca2617d1b237b9916aab3bdf63ee2ac"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.574777 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" event={"ID":"690deab5-f2ed-4fa4-8191-7bd7a625b924","Type":"ContainerStarted","Data":"fd125d1a011c1d75b0046e78d6297ead8e8f206e7fcbc4db2afb114f0500169d"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.579301 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc" event={"ID":"6e17087d-b6e0-4f2c-85b6-ca20ba2fe561","Type":"ContainerStarted","Data":"44ac133ff2b5c9fdd6963de866197f026aa9e7e17affacf9afb948ba6110c412"}
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.592848 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:16 crc kubenswrapper[4813]: E0217 08:43:16.594498 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:17.09448303 +0000 UTC m=+144.755244253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.594842 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" podStartSLOduration=122.5948243 podStartE2EDuration="2m2.5948243s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:16.59375597 +0000 UTC m=+144.254517193" watchObservedRunningTime="2026-02-17 08:43:16.5948243 +0000 UTC m=+144.255585543"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.597405 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.602575 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.672008 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7rx4l" podStartSLOduration=123.671994689 podStartE2EDuration="2m3.671994689s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:16.669910281 +0000 UTC m=+144.330671504" watchObservedRunningTime="2026-02-17 08:43:16.671994689 +0000 UTC m=+144.332755912"
Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.696014 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:16 crc kubenswrapper[4813]: E0217 08:43:16.697155 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:17.197133539 +0000 UTC m=+144.857894762 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.793049 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-gjzhl" podStartSLOduration=122.793034172 podStartE2EDuration="2m2.793034172s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:16.76710748 +0000 UTC m=+144.427868693" watchObservedRunningTime="2026-02-17 08:43:16.793034172 +0000 UTC m=+144.453795395" Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.798455 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:16 crc kubenswrapper[4813]: E0217 08:43:16.798780 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:17.29876726 +0000 UTC m=+144.959528483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.842327 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.849490 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 08:43:16 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 17 08:43:16 crc kubenswrapper[4813]: [+]process-running ok Feb 17 08:43:16 crc kubenswrapper[4813]: healthz check failed Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.849551 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 08:43:16 crc kubenswrapper[4813]: I0217 08:43:16.899809 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:16 crc kubenswrapper[4813]: E0217 08:43:16.900067 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:17.400053291 +0000 UTC m=+145.060814514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.002394 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:17 crc kubenswrapper[4813]: E0217 08:43:17.002708 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:17.502696449 +0000 UTC m=+145.163457672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.105055 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:17 crc kubenswrapper[4813]: E0217 08:43:17.105220 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:17.605193683 +0000 UTC m=+145.265954906 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.105286 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:17 crc kubenswrapper[4813]: E0217 08:43:17.105817 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:17.60581043 +0000 UTC m=+145.266571643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.148322 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" podStartSLOduration=124.148291206 podStartE2EDuration="2m4.148291206s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.082165311 +0000 UTC m=+144.742926534" watchObservedRunningTime="2026-02-17 08:43:17.148291206 +0000 UTC m=+144.809052429" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.178880 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hzjbh" podStartSLOduration=124.178863336 podStartE2EDuration="2m4.178863336s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.146688642 +0000 UTC m=+144.807449865" watchObservedRunningTime="2026-02-17 08:43:17.178863336 +0000 UTC m=+144.839624559" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.211120 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:17 crc kubenswrapper[4813]: E0217 08:43:17.214226 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:17.714199036 +0000 UTC m=+145.374960259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.218958 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:17 crc kubenswrapper[4813]: E0217 08:43:17.219258 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:17.719247895 +0000 UTC m=+145.380009118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.309931 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qrm4l" podStartSLOduration=123.309916174 podStartE2EDuration="2m3.309916174s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.308385722 +0000 UTC m=+144.969146945" watchObservedRunningTime="2026-02-17 08:43:17.309916174 +0000 UTC m=+144.970677397" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.320703 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:17 crc kubenswrapper[4813]: E0217 08:43:17.321068 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:17.821033529 +0000 UTC m=+145.481794752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.383197 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mg6kk" podStartSLOduration=123.383179356 podStartE2EDuration="2m3.383179356s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.382343973 +0000 UTC m=+145.043105196" watchObservedRunningTime="2026-02-17 08:43:17.383179356 +0000 UTC m=+145.043940579" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.384197 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-7rbpj" podStartSLOduration=124.384190724 podStartE2EDuration="2m4.384190724s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.346023756 +0000 UTC m=+145.006784979" watchObservedRunningTime="2026-02-17 08:43:17.384190724 +0000 UTC m=+145.044951947" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.422067 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") 
" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:17 crc kubenswrapper[4813]: E0217 08:43:17.422407 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:17.922394813 +0000 UTC m=+145.583156026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.494115 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz" podStartSLOduration=123.494099732 podStartE2EDuration="2m3.494099732s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.492773325 +0000 UTC m=+145.153534548" watchObservedRunningTime="2026-02-17 08:43:17.494099732 +0000 UTC m=+145.154860945" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.525007 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:17 crc kubenswrapper[4813]: E0217 08:43:17.525740 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:18.02571497 +0000 UTC m=+145.686476193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.532497 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" podStartSLOduration=124.532480276 podStartE2EDuration="2m4.532480276s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.531699434 +0000 UTC m=+145.192460657" watchObservedRunningTime="2026-02-17 08:43:17.532480276 +0000 UTC m=+145.193241499" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.535547 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 08:38:16 +0000 UTC, rotation deadline is 2026-12-17 02:48:11.042783211 +0000 UTC Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.535654 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7266h4m53.507152119s for next certificate rotation Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.631320 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-cf6d7" podStartSLOduration=124.631288299 
podStartE2EDuration="2m4.631288299s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.596141124 +0000 UTC m=+145.256902347" watchObservedRunningTime="2026-02-17 08:43:17.631288299 +0000 UTC m=+145.292049522" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.641746 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:17 crc kubenswrapper[4813]: E0217 08:43:17.642003 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:18.141991003 +0000 UTC m=+145.802752226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.670689 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lvbmn" event={"ID":"75c49bc8-43a0-46ea-a9ed-ed22f124ea3c","Type":"ContainerStarted","Data":"7beafa3c184ae9e3315704850aaab0517fddc5e1731f60b39bf6653d3711a151"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.697986 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zw6c4" podStartSLOduration=123.69796965 podStartE2EDuration="2m3.69796965s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.641035976 +0000 UTC m=+145.301797199" watchObservedRunningTime="2026-02-17 08:43:17.69796965 +0000 UTC m=+145.358730873" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.726481 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" podStartSLOduration=123.726465632 podStartE2EDuration="2m3.726465632s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.700190721 +0000 UTC m=+145.360951944" watchObservedRunningTime="2026-02-17 08:43:17.726465632 +0000 UTC m=+145.387226855" Feb 17 
08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.742719 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:17 crc kubenswrapper[4813]: E0217 08:43:17.743086 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:18.243070348 +0000 UTC m=+145.903831571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.744138 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-86t4c" event={"ID":"3b614c49-66c0-41c7-bec2-c657495f1a2c","Type":"ContainerStarted","Data":"de9d207481734754e5f1434784eb3ed07a15370020c372e38f458929db08c6e1"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.758507 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l2l9m" event={"ID":"70da8a3c-ff49-4f82-a68b-d955c2cceb2b","Type":"ContainerStarted","Data":"0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.781603 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" event={"ID":"37375f80-f004-4621-b863-326c6e296435","Type":"ContainerStarted","Data":"bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.781914 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8j65n" podStartSLOduration=123.781891794 podStartE2EDuration="2m3.781891794s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.724609421 +0000 UTC m=+145.385370634" watchObservedRunningTime="2026-02-17 08:43:17.781891794 +0000 UTC m=+145.442653017" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.782428 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.785622 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b" event={"ID":"088350fe-751e-41a8-8931-784e2a419e22","Type":"ContainerStarted","Data":"fc44f957308d868954accc8576c4a71341ee888b29bcfe80600740c991147a41"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.806899 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc" event={"ID":"6e17087d-b6e0-4f2c-85b6-ca20ba2fe561","Type":"ContainerStarted","Data":"e88ea4a07cf49b7702e9736cffff14c7b3e164d983a80bb4dd8a0e86f09a2f71"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.823661 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-55zl9" podStartSLOduration=6.823645371 
podStartE2EDuration="6.823645371s" podCreationTimestamp="2026-02-17 08:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.802736447 +0000 UTC m=+145.463497670" watchObservedRunningTime="2026-02-17 08:43:17.823645371 +0000 UTC m=+145.484406594" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.834575 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" event={"ID":"b0a696d4-3301-4fd5-9d70-efa790fbce35","Type":"ContainerStarted","Data":"68568d543f23cc857e8587ee26f434a66a9e26c00cfbf3514d39704756d6b9b8"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.835025 4813 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-whhpn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.835094 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" podUID="37375f80-f004-4621-b863-326c6e296435" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.837005 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" podStartSLOduration=123.836985097 podStartE2EDuration="2m3.836985097s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.818665964 +0000 UTC m=+145.479427177" 
watchObservedRunningTime="2026-02-17 08:43:17.836985097 +0000 UTC m=+145.497746320" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.839440 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-77nlz" event={"ID":"ca47b2a4-9965-4fa6-8b80-1e21931f0860","Type":"ContainerStarted","Data":"0a6148d33a5748ebc4a22b8f6db8e7c4bf6a8c9e146c4ac1f9b78bfba47c1f6c"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.841807 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.842965 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62lng" event={"ID":"11abe3c6-6bcb-4453-ba5d-71329e039ccc","Type":"ContainerStarted","Data":"27f8bf064bb577b54415239f8de8c0e628294b9bb8a689c8dc3f0203182ca5f3"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.842990 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62lng" event={"ID":"11abe3c6-6bcb-4453-ba5d-71329e039ccc","Type":"ContainerStarted","Data":"baab777b41103683dbbaa05c543a6c95670da345a8a285ef123cbe6d8ed63b11"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.846317 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:17 crc kubenswrapper[4813]: E0217 08:43:17.847282 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-17 08:43:18.347269559 +0000 UTC m=+146.008030782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.848528 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 08:43:17 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 17 08:43:17 crc kubenswrapper[4813]: [+]process-running ok Feb 17 08:43:17 crc kubenswrapper[4813]: healthz check failed Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.848582 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.866632 4813 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-szxq2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.866685 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2" 
podUID="d34411f8-63ef-4455-a059-992ecf841688" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.872857 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" event={"ID":"edb8ea95-7464-414b-8208-03a3a8426d74","Type":"ContainerStarted","Data":"eaab1471b9892658f83af9f26aae89ef7ba443e2be5907b02ee9a27bfc899add"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.874040 4813 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7bmzh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.874069 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh" podUID="edb8ea95-7464-414b-8208-03a3a8426d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.875568 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-z2wgd" podStartSLOduration=123.875561076 podStartE2EDuration="2m3.875561076s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.873909121 +0000 UTC m=+145.534670344" watchObservedRunningTime="2026-02-17 08:43:17.875561076 +0000 UTC m=+145.536322299" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.889279 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" event={"ID":"627203b8-948f-4881-8398-ecd21cc274f6","Type":"ContainerStarted","Data":"c6a18e2746f70f3f82617e42915440c0bf43c9d3b42654d48923be0e0ea166fe"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.904076 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-n8v5w" podStartSLOduration=123.904063269 podStartE2EDuration="2m3.904063269s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.901853028 +0000 UTC m=+145.562614251" watchObservedRunningTime="2026-02-17 08:43:17.904063269 +0000 UTC m=+145.564824492" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.919237 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k" event={"ID":"a507abb0-fee7-454a-bfb9-7d4e3e31bf56","Type":"ContainerStarted","Data":"8fddf1a1492ae88f4e4b0e1d11e1cbb6c53682ea549ce09eeadec400ba0ad0e5"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.947079 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:17 crc kubenswrapper[4813]: E0217 08:43:17.948146 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 08:43:18.448123789 +0000 UTC m=+146.108885012 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.963406 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lmkjb" event={"ID":"b8173c4a-4d8e-4a69-b60f-56807f886bbf","Type":"ContainerStarted","Data":"0ed650988b20f02e7d97649ef64674189bb7f7c63a1896591e6ec57a0dbe225b"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.995149 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2" podStartSLOduration=123.99513642 podStartE2EDuration="2m3.99513642s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:17.993859925 +0000 UTC m=+145.654621148" watchObservedRunningTime="2026-02-17 08:43:17.99513642 +0000 UTC m=+145.655897643" Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.995525 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689" event={"ID":"4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f","Type":"ContainerStarted","Data":"5fca9f8b11bd8fac1614ca91366a59f84839fc8d6c1d229699785e338fc1ebd1"} Feb 17 08:43:17 crc kubenswrapper[4813]: I0217 08:43:17.996206 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.013539 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" event={"ID":"78c22549-ebe5-4e3a-8805-d180911a3c94","Type":"ContainerStarted","Data":"5c8fd33ec58c9b81b747b6e56c3d8f20f91167bd74af702d7b99b640972e8aa7"} Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.015739 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.021899 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k" podStartSLOduration=124.021885004 podStartE2EDuration="2m4.021885004s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:18.019858048 +0000 UTC m=+145.680619271" watchObservedRunningTime="2026-02-17 08:43:18.021885004 +0000 UTC m=+145.682646227" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.053333 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:18 crc kubenswrapper[4813]: E0217 08:43:18.053743 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 08:43:18.553708648 +0000 UTC m=+146.214469871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.064672 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.079604 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" event={"ID":"4c3fac74-1086-4222-ad88-6c230d11c667","Type":"ContainerStarted","Data":"45eb9e9bf8cfc12736af97c0828a885fec55798452d317f5f2caf5a671d355a2"} Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.096969 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-7rbpj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.097024 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7rbpj" podUID="dcabcb2d-1368-4303-b9e8-f7fd269ce1ca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.098772 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-l2l9m" 
podStartSLOduration=125.098754335 podStartE2EDuration="2m5.098754335s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:18.096685818 +0000 UTC m=+145.757447041" watchObservedRunningTime="2026-02-17 08:43:18.098754335 +0000 UTC m=+145.759515548" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.098955 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfxq6" podStartSLOduration=124.09894955 podStartE2EDuration="2m4.09894955s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:18.050372626 +0000 UTC m=+145.711133849" watchObservedRunningTime="2026-02-17 08:43:18.09894955 +0000 UTC m=+145.759710773" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.140948 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" podStartSLOduration=124.140931273 podStartE2EDuration="2m4.140931273s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:18.138441265 +0000 UTC m=+145.799202488" watchObservedRunningTime="2026-02-17 08:43:18.140931273 +0000 UTC m=+145.801692496" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.154475 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 
17 08:43:18 crc kubenswrapper[4813]: E0217 08:43:18.156083 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:18.656056418 +0000 UTC m=+146.316817641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.201213 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" podStartSLOduration=124.201194368 podStartE2EDuration="2m4.201194368s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:18.190187826 +0000 UTC m=+145.850949049" watchObservedRunningTime="2026-02-17 08:43:18.201194368 +0000 UTC m=+145.861955591" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.237258 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-62lng" podStartSLOduration=124.237239378 podStartE2EDuration="2m4.237239378s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:18.22386186 +0000 UTC m=+145.884623083" watchObservedRunningTime="2026-02-17 
08:43:18.237239378 +0000 UTC m=+145.898000601" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.267419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:18 crc kubenswrapper[4813]: E0217 08:43:18.267885 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:18.767872009 +0000 UTC m=+146.428633232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.268247 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kkhmc" podStartSLOduration=124.268233839 podStartE2EDuration="2m4.268233839s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:18.266686436 +0000 UTC m=+145.927447659" watchObservedRunningTime="2026-02-17 08:43:18.268233839 +0000 UTC m=+145.928995062" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 
08:43:18.295800 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.332127 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dj88q" podStartSLOduration=124.332114613 podStartE2EDuration="2m4.332114613s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:18.330068357 +0000 UTC m=+145.990829580" watchObservedRunningTime="2026-02-17 08:43:18.332114613 +0000 UTC m=+145.992875836" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.370863 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:18 crc kubenswrapper[4813]: E0217 08:43:18.371375 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:18.87135223 +0000 UTC m=+146.532113453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.388107 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689" podStartSLOduration=124.38809285 podStartE2EDuration="2m4.38809285s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:18.383690519 +0000 UTC m=+146.044451742" watchObservedRunningTime="2026-02-17 08:43:18.38809285 +0000 UTC m=+146.048854063" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.455124 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" podStartSLOduration=124.45510982 podStartE2EDuration="2m4.45510982s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:18.453676551 +0000 UTC m=+146.114437774" watchObservedRunningTime="2026-02-17 08:43:18.45510982 +0000 UTC m=+146.115871043" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.472280 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: 
\"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:18 crc kubenswrapper[4813]: E0217 08:43:18.472664 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:18.972650602 +0000 UTC m=+146.633411825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.573746 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:18 crc kubenswrapper[4813]: E0217 08:43:18.573914 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:19.073888452 +0000 UTC m=+146.734649675 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.574276 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:18 crc kubenswrapper[4813]: E0217 08:43:18.574561 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:19.07455235 +0000 UTC m=+146.735313573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.643107 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.643148 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.652530 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j" Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.675215 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:18 crc kubenswrapper[4813]: E0217 08:43:18.675563 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:19.175532773 +0000 UTC m=+146.836293996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.776391 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:18 crc kubenswrapper[4813]: E0217 08:43:18.776692 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:19.27667964 +0000 UTC m=+146.937440863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.789737 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-cf6d7"
Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.846167 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:18 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:18 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:18 crc kubenswrapper[4813]: healthz check failed
Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.846244 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.877228 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:18 crc kubenswrapper[4813]: E0217 08:43:18.877556 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:19.37754227 +0000 UTC m=+147.038303493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:18 crc kubenswrapper[4813]: I0217 08:43:18.978336 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:18 crc kubenswrapper[4813]: E0217 08:43:18.978604 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:19.478593394 +0000 UTC m=+147.139354617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.079100 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:19 crc kubenswrapper[4813]: E0217 08:43:19.079434 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:19.579413513 +0000 UTC m=+147.240174736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.090511 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b" event={"ID":"088350fe-751e-41a8-8931-784e2a419e22","Type":"ContainerStarted","Data":"658ffe7f51aa12475a7370cf52ee39a989bc04b7d0118c2b298a41b95077863d"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.094374 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7r2sv" event={"ID":"4c3fac74-1086-4222-ad88-6c230d11c667","Type":"ContainerStarted","Data":"e748b669c5bebfacca4f1a844b79ede196a509292916a6b168c984538d8ca502"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.096333 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2" event={"ID":"d34411f8-63ef-4455-a059-992ecf841688","Type":"ContainerStarted","Data":"ab2bbecf51f64fa9eb267439aa9ebe98b9573bab5a9de2f96d04a1a95292c6cd"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.099981 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hgx9k" event={"ID":"a507abb0-fee7-454a-bfb9-7d4e3e31bf56","Type":"ContainerStarted","Data":"73e69c0b938161ee1f335e27ef4a49d021faf24776e6097d02dc77224408588a"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.102933 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-86t4c" event={"ID":"3b614c49-66c0-41c7-bec2-c657495f1a2c","Type":"ContainerStarted","Data":"b8fae18b5e2cb656e62d08cbfc03518274559d5518e10d674290bdb17aa38e53"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.109197 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" event={"ID":"161ea8a7-bfe7-4a23-b625-21f6f38e5b37","Type":"ContainerStarted","Data":"899456dfa64037e25c7316816d048cb0c08eaea585399ed03b581b1f3f6d62b3"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.126439 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-szxq2"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.134290 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" event={"ID":"38bc9742-49be-4bb0-924e-2ce0db82ec2e","Type":"ContainerStarted","Data":"730f973450734438385d3f414889867925e3886e2950bb40faad100449a1edee"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.139504 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.153945 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-dqskm" podStartSLOduration=125.153932089 podStartE2EDuration="2m5.153932089s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:19.153167518 +0000 UTC m=+146.813928741" watchObservedRunningTime="2026-02-17 08:43:19.153932089 +0000 UTC m=+146.814693312"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.154501 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lsz9b" podStartSLOduration=125.154495115 podStartE2EDuration="2m5.154495115s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:19.126905807 +0000 UTC m=+146.787667030" watchObservedRunningTime="2026-02-17 08:43:19.154495115 +0000 UTC m=+146.815256338"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.180488 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689" event={"ID":"4a6fdaa1-fc85-4de1-b32e-3a9c98834e1f","Type":"ContainerStarted","Data":"75541d8003cd51cc4c1f68753da4c490affacdcffb3e9318ec6c34165886de9c"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.181510 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:19 crc kubenswrapper[4813]: E0217 08:43:19.183211 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:19.683198523 +0000 UTC m=+147.343959746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.209674 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lvbmn" event={"ID":"75c49bc8-43a0-46ea-a9ed-ed22f124ea3c","Type":"ContainerStarted","Data":"84fad2b67b9500bab06b172cdd7c6aa143a2c090559b1917aa4d31a7c5cb37a6"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.237185 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-86t4c" podStartSLOduration=8.237169085 podStartE2EDuration="8.237169085s" podCreationTimestamp="2026-02-17 08:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:19.23555643 +0000 UTC m=+146.896317653" watchObservedRunningTime="2026-02-17 08:43:19.237169085 +0000 UTC m=+146.897930308"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.239823 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" event={"ID":"d8b72e72-ec67-4394-9d76-d7cfb15566ed","Type":"ContainerStarted","Data":"e679cc463bb26a1fba82ac485419bd0a7f12fa205772414ee1605296265946bf"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.242475 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x222k" event={"ID":"eb317bb8-7cfa-4865-8390-c7be4460c44b","Type":"ContainerStarted","Data":"8c3ba394d256fd0b8ab133e383ea7a66aa0c64052a0705d270e4a7a92dc32411"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.242493 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x222k" event={"ID":"eb317bb8-7cfa-4865-8390-c7be4460c44b","Type":"ContainerStarted","Data":"31dd2842671f0e8b03efb97f9506bca071db6f424392e087fca2ae4211733681"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.245173 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5" event={"ID":"b23fc308-85e3-440a-b2d0-5895fe8b79a9","Type":"ContainerStarted","Data":"bc21643cc1a7a6b8adccb7e5fcc9523a3e878e06f44e231973418611ef835d25"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.251112 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lmkjb" event={"ID":"b8173c4a-4d8e-4a69-b60f-56807f886bbf","Type":"ContainerStarted","Data":"7d440d5c067aaeba862eabf3f6ca1491d6d74eb5a8e031d2eef6a044a27b5aaa"}
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.251909 4813 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-whhpn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body=
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.251948 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" podUID="37375f80-f004-4621-b863-326c6e296435" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.251921 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lmkjb"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.275459 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fkd2j"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.279743 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-lvbmn" podStartSLOduration=125.279718873 podStartE2EDuration="2m5.279718873s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:19.268066573 +0000 UTC m=+146.928827796" watchObservedRunningTime="2026-02-17 08:43:19.279718873 +0000 UTC m=+146.940480096"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.282797 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:19 crc kubenswrapper[4813]: E0217 08:43:19.287051 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:19.787019634 +0000 UTC m=+147.447780857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.287729 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7bmzh"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.306221 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569" podStartSLOduration=126.30620388 podStartE2EDuration="2m6.30620388s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:19.303558508 +0000 UTC m=+146.964319731" watchObservedRunningTime="2026-02-17 08:43:19.30620388 +0000 UTC m=+146.966965103"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.372053 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lbvg5" podStartSLOduration=125.372039378 podStartE2EDuration="2m5.372039378s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:19.341378806 +0000 UTC m=+147.002140019" watchObservedRunningTime="2026-02-17 08:43:19.372039378 +0000 UTC m=+147.032800601"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.386464 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:19 crc kubenswrapper[4813]: E0217 08:43:19.390096 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:19.890084234 +0000 UTC m=+147.550845457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.414334 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-x222k" podStartSLOduration=126.414317449 podStartE2EDuration="2m6.414317449s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:19.374463435 +0000 UTC m=+147.035224648" watchObservedRunningTime="2026-02-17 08:43:19.414317449 +0000 UTC m=+147.075078672"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.455884 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lmkjb" podStartSLOduration=8.45586829 podStartE2EDuration="8.45586829s" podCreationTimestamp="2026-02-17 08:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:19.414575246 +0000 UTC m=+147.075336459" watchObservedRunningTime="2026-02-17 08:43:19.45586829 +0000 UTC m=+147.116629513"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.487842 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:19 crc kubenswrapper[4813]: E0217 08:43:19.488221 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:19.988205458 +0000 UTC m=+147.648966681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.589927 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:19 crc kubenswrapper[4813]: E0217 08:43:19.590348 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.090330792 +0000 UTC m=+147.751092015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.690772 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:19 crc kubenswrapper[4813]: E0217 08:43:19.691518 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.19150263 +0000 UTC m=+147.852263853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.792627 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:19 crc kubenswrapper[4813]: E0217 08:43:19.792925 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.292914855 +0000 UTC m=+147.953676078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.843202 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:19 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:19 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:19 crc kubenswrapper[4813]: healthz check failed
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.843273 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.894098 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:19 crc kubenswrapper[4813]: E0217 08:43:19.894347 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.394277938 +0000 UTC m=+148.055039161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.894572 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:19 crc kubenswrapper[4813]: E0217 08:43:19.895011 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.394995478 +0000 UTC m=+148.055756701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.996029 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:19 crc kubenswrapper[4813]: E0217 08:43:19.996213 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.496187437 +0000 UTC m=+148.156948660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:19 crc kubenswrapper[4813]: I0217 08:43:19.996488 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:19 crc kubenswrapper[4813]: E0217 08:43:19.996767 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.496755172 +0000 UTC m=+148.157516395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.033518 4813 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.097922 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:20 crc kubenswrapper[4813]: E0217 08:43:20.098083 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.598058164 +0000 UTC m=+148.258819387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.098261 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:20 crc kubenswrapper[4813]: E0217 08:43:20.098523 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.598511896 +0000 UTC m=+148.259273119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.199761 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:20 crc kubenswrapper[4813]: E0217 08:43:20.199964 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.699935691 +0000 UTC m=+148.360696924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.200232 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:43:20 crc kubenswrapper[4813]: E0217 08:43:20.200541 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.700533348 +0000 UTC m=+148.361294571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.257223 4813 generic.go:334] "Generic (PLEG): container finished" podID="b0a696d4-3301-4fd5-9d70-efa790fbce35" containerID="68568d543f23cc857e8587ee26f434a66a9e26c00cfbf3514d39704756d6b9b8" exitCode=0
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.257339 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" event={"ID":"b0a696d4-3301-4fd5-9d70-efa790fbce35","Type":"ContainerDied","Data":"68568d543f23cc857e8587ee26f434a66a9e26c00cfbf3514d39704756d6b9b8"}
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.261168 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" event={"ID":"d8b72e72-ec67-4394-9d76-d7cfb15566ed","Type":"ContainerStarted","Data":"9acb0dc4e8b2baffc91fb5be0d0b53ae77755b08e92f6008f2c8f1e7a348bbb5"}
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.261215 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" event={"ID":"d8b72e72-ec67-4394-9d76-d7cfb15566ed","Type":"ContainerStarted","Data":"e01c99f8f7b45574bbab992a0d93c1b3ddcd395d18f46ff50a204b94648fbb2d"}
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.261225 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" event={"ID":"d8b72e72-ec67-4394-9d76-d7cfb15566ed","Type":"ContainerStarted","Data":"f9b37d3ebe19c60c11f2025f1de295acef6a4c849899b5afcdd727db64151f9f"}
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.264751 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn"
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.267867 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-75569"
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.298089 4813 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T08:43:20.033545883Z","Handler":null,"Name":""}
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.301026 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 08:43:20 crc kubenswrapper[4813]: E0217 08:43:20.301219 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.801195872 +0000 UTC m=+148.461957095 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.301421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:20 crc kubenswrapper[4813]: E0217 08:43:20.301714 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 08:43:20.801701756 +0000 UTC m=+148.462462979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9pw9r" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.304759 4813 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.304779 4813 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.346834 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pqvf4" podStartSLOduration=9.346817185 podStartE2EDuration="9.346817185s" podCreationTimestamp="2026-02-17 08:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:20.34591146 +0000 UTC m=+148.006672683" watchObservedRunningTime="2026-02-17 08:43:20.346817185 +0000 UTC m=+148.007578408" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.402344 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.409968 4813 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.503862 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.507197 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.507229 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.529173 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9pw9r\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.638393 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.721099 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xxj57"] Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.722855 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.724822 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.739602 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xxj57"] Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.807124 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064c46bd-0e88-4dca-9a42-923b3eae48a1-catalog-content\") pod \"community-operators-xxj57\" (UID: \"064c46bd-0e88-4dca-9a42-923b3eae48a1\") " pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.807202 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsvpn\" (UniqueName: \"kubernetes.io/projected/064c46bd-0e88-4dca-9a42-923b3eae48a1-kube-api-access-tsvpn\") pod \"community-operators-xxj57\" (UID: \"064c46bd-0e88-4dca-9a42-923b3eae48a1\") " pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.807223 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064c46bd-0e88-4dca-9a42-923b3eae48a1-utilities\") pod \"community-operators-xxj57\" (UID: \"064c46bd-0e88-4dca-9a42-923b3eae48a1\") " pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.819930 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.820640 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.824356 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.824870 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.838354 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.843020 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 08:43:20 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 17 08:43:20 crc kubenswrapper[4813]: [+]process-running ok Feb 17 08:43:20 crc kubenswrapper[4813]: healthz check failed Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.843059 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.883054 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9pw9r"] Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.907991 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c45ae7e7-0aba-442e-85d3-c2a10878b247-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"c45ae7e7-0aba-442e-85d3-c2a10878b247\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.908075 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsvpn\" (UniqueName: \"kubernetes.io/projected/064c46bd-0e88-4dca-9a42-923b3eae48a1-kube-api-access-tsvpn\") pod \"community-operators-xxj57\" (UID: \"064c46bd-0e88-4dca-9a42-923b3eae48a1\") " pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.908103 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064c46bd-0e88-4dca-9a42-923b3eae48a1-utilities\") pod \"community-operators-xxj57\" (UID: \"064c46bd-0e88-4dca-9a42-923b3eae48a1\") " pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.908170 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c45ae7e7-0aba-442e-85d3-c2a10878b247-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c45ae7e7-0aba-442e-85d3-c2a10878b247\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.908199 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064c46bd-0e88-4dca-9a42-923b3eae48a1-catalog-content\") pod \"community-operators-xxj57\" (UID: \"064c46bd-0e88-4dca-9a42-923b3eae48a1\") " pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.908773 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064c46bd-0e88-4dca-9a42-923b3eae48a1-catalog-content\") pod 
\"community-operators-xxj57\" (UID: \"064c46bd-0e88-4dca-9a42-923b3eae48a1\") " pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.909591 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064c46bd-0e88-4dca-9a42-923b3eae48a1-utilities\") pod \"community-operators-xxj57\" (UID: \"064c46bd-0e88-4dca-9a42-923b3eae48a1\") " pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.913178 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4wtt9"] Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.914347 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wtt9" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.921469 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.928743 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsvpn\" (UniqueName: \"kubernetes.io/projected/064c46bd-0e88-4dca-9a42-923b3eae48a1-kube-api-access-tsvpn\") pod \"community-operators-xxj57\" (UID: \"064c46bd-0e88-4dca-9a42-923b3eae48a1\") " pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:43:20 crc kubenswrapper[4813]: I0217 08:43:20.929422 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wtt9"] Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.009061 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0624db-755c-4a56-afd4-02eeb8b8b1db-catalog-content\") pod \"certified-operators-4wtt9\" (UID: \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\") 
" pod="openshift-marketplace/certified-operators-4wtt9" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.009202 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8psm\" (UniqueName: \"kubernetes.io/projected/0d0624db-755c-4a56-afd4-02eeb8b8b1db-kube-api-access-j8psm\") pod \"certified-operators-4wtt9\" (UID: \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\") " pod="openshift-marketplace/certified-operators-4wtt9" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.009286 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c45ae7e7-0aba-442e-85d3-c2a10878b247-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c45ae7e7-0aba-442e-85d3-c2a10878b247\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.009375 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.009416 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0624db-755c-4a56-afd4-02eeb8b8b1db-utilities\") pod \"certified-operators-4wtt9\" (UID: \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\") " pod="openshift-marketplace/certified-operators-4wtt9" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.009436 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/c45ae7e7-0aba-442e-85d3-c2a10878b247-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c45ae7e7-0aba-442e-85d3-c2a10878b247\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.009467 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.009530 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c45ae7e7-0aba-442e-85d3-c2a10878b247-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c45ae7e7-0aba-442e-85d3-c2a10878b247\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.012768 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.013199 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.031030 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c45ae7e7-0aba-442e-85d3-c2a10878b247-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c45ae7e7-0aba-442e-85d3-c2a10878b247\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.058560 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.113596 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0624db-755c-4a56-afd4-02eeb8b8b1db-catalog-content\") pod \"certified-operators-4wtt9\" (UID: \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\") " pod="openshift-marketplace/certified-operators-4wtt9" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.113640 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8psm\" (UniqueName: \"kubernetes.io/projected/0d0624db-755c-4a56-afd4-02eeb8b8b1db-kube-api-access-j8psm\") pod \"certified-operators-4wtt9\" (UID: \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\") " pod="openshift-marketplace/certified-operators-4wtt9" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.113675 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.113706 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0624db-755c-4a56-afd4-02eeb8b8b1db-utilities\") pod 
\"certified-operators-4wtt9\" (UID: \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\") " pod="openshift-marketplace/certified-operators-4wtt9" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.113723 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.114950 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0624db-755c-4a56-afd4-02eeb8b8b1db-catalog-content\") pod \"certified-operators-4wtt9\" (UID: \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\") " pod="openshift-marketplace/certified-operators-4wtt9" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.116542 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0624db-755c-4a56-afd4-02eeb8b8b1db-utilities\") pod \"certified-operators-4wtt9\" (UID: \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\") " pod="openshift-marketplace/certified-operators-4wtt9" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.123879 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.127875 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.153647 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.154593 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8psm\" (UniqueName: \"kubernetes.io/projected/0d0624db-755c-4a56-afd4-02eeb8b8b1db-kube-api-access-j8psm\") pod \"certified-operators-4wtt9\" (UID: \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\") " pod="openshift-marketplace/certified-operators-4wtt9" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.170724 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.181541 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.188607 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.189125 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w257s"] Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.190016 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w257s"] Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.190095 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w257s" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.230877 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.244577 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wtt9" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.298725 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" event={"ID":"41699b82-4fbd-4bc2-a45c-6971618962df","Type":"ContainerStarted","Data":"1e0d36ca4c88d246c6378b0896045bfc8f6c9af3eae4e93ca976fab6d3ed34af"} Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.298770 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" event={"ID":"41699b82-4fbd-4bc2-a45c-6971618962df","Type":"ContainerStarted","Data":"04567e2f1a1502fd0886e5ea8fc1c4287a46a5110465403433a0cfacdfede547"} Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.315155 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npwj4\" (UniqueName: \"kubernetes.io/projected/0ae65165-a983-4b8c-8478-55c0853def8a-kube-api-access-npwj4\") pod \"community-operators-w257s\" (UID: \"0ae65165-a983-4b8c-8478-55c0853def8a\") " pod="openshift-marketplace/community-operators-w257s" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.315521 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ae65165-a983-4b8c-8478-55c0853def8a-catalog-content\") pod \"community-operators-w257s\" (UID: \"0ae65165-a983-4b8c-8478-55c0853def8a\") " pod="openshift-marketplace/community-operators-w257s" Feb 
17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.315556 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ae65165-a983-4b8c-8478-55c0853def8a-utilities\") pod \"community-operators-w257s\" (UID: \"0ae65165-a983-4b8c-8478-55c0853def8a\") " pod="openshift-marketplace/community-operators-w257s" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.319154 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jhzww"] Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.320263 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhzww" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.329167 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" podStartSLOduration=127.329142417 podStartE2EDuration="2m7.329142417s" podCreationTimestamp="2026-02-17 08:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:21.317053586 +0000 UTC m=+148.977814809" watchObservedRunningTime="2026-02-17 08:43:21.329142417 +0000 UTC m=+148.989903640" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.338422 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhzww"] Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.416231 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npwj4\" (UniqueName: \"kubernetes.io/projected/0ae65165-a983-4b8c-8478-55c0853def8a-kube-api-access-npwj4\") pod \"community-operators-w257s\" (UID: \"0ae65165-a983-4b8c-8478-55c0853def8a\") " pod="openshift-marketplace/community-operators-w257s" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.416277 
4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t86sj\" (UniqueName: \"kubernetes.io/projected/12b56a13-1891-46e8-9f6a-0045496cb7ee-kube-api-access-t86sj\") pod \"certified-operators-jhzww\" (UID: \"12b56a13-1891-46e8-9f6a-0045496cb7ee\") " pod="openshift-marketplace/certified-operators-jhzww" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.416321 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ae65165-a983-4b8c-8478-55c0853def8a-catalog-content\") pod \"community-operators-w257s\" (UID: \"0ae65165-a983-4b8c-8478-55c0853def8a\") " pod="openshift-marketplace/community-operators-w257s" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.416338 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b56a13-1891-46e8-9f6a-0045496cb7ee-utilities\") pod \"certified-operators-jhzww\" (UID: \"12b56a13-1891-46e8-9f6a-0045496cb7ee\") " pod="openshift-marketplace/certified-operators-jhzww" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.416401 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ae65165-a983-4b8c-8478-55c0853def8a-utilities\") pod \"community-operators-w257s\" (UID: \"0ae65165-a983-4b8c-8478-55c0853def8a\") " pod="openshift-marketplace/community-operators-w257s" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.416518 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b56a13-1891-46e8-9f6a-0045496cb7ee-catalog-content\") pod \"certified-operators-jhzww\" (UID: \"12b56a13-1891-46e8-9f6a-0045496cb7ee\") " pod="openshift-marketplace/certified-operators-jhzww" Feb 17 08:43:21 crc 
kubenswrapper[4813]: I0217 08:43:21.418064 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ae65165-a983-4b8c-8478-55c0853def8a-catalog-content\") pod \"community-operators-w257s\" (UID: \"0ae65165-a983-4b8c-8478-55c0853def8a\") " pod="openshift-marketplace/community-operators-w257s" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.418833 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ae65165-a983-4b8c-8478-55c0853def8a-utilities\") pod \"community-operators-w257s\" (UID: \"0ae65165-a983-4b8c-8478-55c0853def8a\") " pod="openshift-marketplace/community-operators-w257s" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.435896 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npwj4\" (UniqueName: \"kubernetes.io/projected/0ae65165-a983-4b8c-8478-55c0853def8a-kube-api-access-npwj4\") pod \"community-operators-w257s\" (UID: \"0ae65165-a983-4b8c-8478-55c0853def8a\") " pod="openshift-marketplace/community-operators-w257s" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.503468 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w257s" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.517913 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b56a13-1891-46e8-9f6a-0045496cb7ee-catalog-content\") pod \"certified-operators-jhzww\" (UID: \"12b56a13-1891-46e8-9f6a-0045496cb7ee\") " pod="openshift-marketplace/certified-operators-jhzww" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.517983 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t86sj\" (UniqueName: \"kubernetes.io/projected/12b56a13-1891-46e8-9f6a-0045496cb7ee-kube-api-access-t86sj\") pod \"certified-operators-jhzww\" (UID: \"12b56a13-1891-46e8-9f6a-0045496cb7ee\") " pod="openshift-marketplace/certified-operators-jhzww" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.518006 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b56a13-1891-46e8-9f6a-0045496cb7ee-utilities\") pod \"certified-operators-jhzww\" (UID: \"12b56a13-1891-46e8-9f6a-0045496cb7ee\") " pod="openshift-marketplace/certified-operators-jhzww" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.518390 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b56a13-1891-46e8-9f6a-0045496cb7ee-catalog-content\") pod \"certified-operators-jhzww\" (UID: \"12b56a13-1891-46e8-9f6a-0045496cb7ee\") " pod="openshift-marketplace/certified-operators-jhzww" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.518432 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b56a13-1891-46e8-9f6a-0045496cb7ee-utilities\") pod \"certified-operators-jhzww\" (UID: \"12b56a13-1891-46e8-9f6a-0045496cb7ee\") " 
pod="openshift-marketplace/certified-operators-jhzww" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.534241 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t86sj\" (UniqueName: \"kubernetes.io/projected/12b56a13-1891-46e8-9f6a-0045496cb7ee-kube-api-access-t86sj\") pod \"certified-operators-jhzww\" (UID: \"12b56a13-1891-46e8-9f6a-0045496cb7ee\") " pod="openshift-marketplace/certified-operators-jhzww" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.627531 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wtt9"] Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.644758 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xxj57"] Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.651756 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" Feb 17 08:43:21 crc kubenswrapper[4813]: W0217 08:43:21.654227 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d0624db_755c_4a56_afd4_02eeb8b8b1db.slice/crio-a7f92c038175d0c5cef83f1070edde485350d1f5e55f3dfed1de97a8a5158851 WatchSource:0}: Error finding container a7f92c038175d0c5cef83f1070edde485350d1f5e55f3dfed1de97a8a5158851: Status 404 returned error can't find the container with id a7f92c038175d0c5cef83f1070edde485350d1f5e55f3dfed1de97a8a5158851 Feb 17 08:43:21 crc kubenswrapper[4813]: W0217 08:43:21.655331 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod064c46bd_0e88_4dca_9a42_923b3eae48a1.slice/crio-9466a489ddeb45e261233f7d1b5a97a8f620a8ff3d454ad8a23d7e2ea40b42d2 WatchSource:0}: Error finding container 9466a489ddeb45e261233f7d1b5a97a8f620a8ff3d454ad8a23d7e2ea40b42d2: Status 404 returned 
error can't find the container with id 9466a489ddeb45e261233f7d1b5a97a8f620a8ff3d454ad8a23d7e2ea40b42d2 Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.665451 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhzww" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.703323 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 08:43:21 crc kubenswrapper[4813]: W0217 08:43:21.710097 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc45ae7e7_0aba_442e_85d3_c2a10878b247.slice/crio-fc97f573c9bb3fb0da4f94253546b792e9b6c3c1deba75254eb682be2654c18f WatchSource:0}: Error finding container fc97f573c9bb3fb0da4f94253546b792e9b6c3c1deba75254eb682be2654c18f: Status 404 returned error can't find the container with id fc97f573c9bb3fb0da4f94253546b792e9b6c3c1deba75254eb682be2654c18f Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.720063 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82xxw\" (UniqueName: \"kubernetes.io/projected/b0a696d4-3301-4fd5-9d70-efa790fbce35-kube-api-access-82xxw\") pod \"b0a696d4-3301-4fd5-9d70-efa790fbce35\" (UID: \"b0a696d4-3301-4fd5-9d70-efa790fbce35\") " Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.720103 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0a696d4-3301-4fd5-9d70-efa790fbce35-secret-volume\") pod \"b0a696d4-3301-4fd5-9d70-efa790fbce35\" (UID: \"b0a696d4-3301-4fd5-9d70-efa790fbce35\") " Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.720158 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0a696d4-3301-4fd5-9d70-efa790fbce35-config-volume\") pod 
\"b0a696d4-3301-4fd5-9d70-efa790fbce35\" (UID: \"b0a696d4-3301-4fd5-9d70-efa790fbce35\") " Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.722694 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a696d4-3301-4fd5-9d70-efa790fbce35-config-volume" (OuterVolumeSpecName: "config-volume") pod "b0a696d4-3301-4fd5-9d70-efa790fbce35" (UID: "b0a696d4-3301-4fd5-9d70-efa790fbce35"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.726017 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a696d4-3301-4fd5-9d70-efa790fbce35-kube-api-access-82xxw" (OuterVolumeSpecName: "kube-api-access-82xxw") pod "b0a696d4-3301-4fd5-9d70-efa790fbce35" (UID: "b0a696d4-3301-4fd5-9d70-efa790fbce35"). InnerVolumeSpecName "kube-api-access-82xxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.726559 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a696d4-3301-4fd5-9d70-efa790fbce35-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b0a696d4-3301-4fd5-9d70-efa790fbce35" (UID: "b0a696d4-3301-4fd5-9d70-efa790fbce35"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:43:21 crc kubenswrapper[4813]: W0217 08:43:21.784609 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-d0e9b64fdcaf249dcc737797e78cb8c543cba40befc692e0c39e4b231897aacb WatchSource:0}: Error finding container d0e9b64fdcaf249dcc737797e78cb8c543cba40befc692e0c39e4b231897aacb: Status 404 returned error can't find the container with id d0e9b64fdcaf249dcc737797e78cb8c543cba40befc692e0c39e4b231897aacb Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.822056 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82xxw\" (UniqueName: \"kubernetes.io/projected/b0a696d4-3301-4fd5-9d70-efa790fbce35-kube-api-access-82xxw\") on node \"crc\" DevicePath \"\"" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.822106 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0a696d4-3301-4fd5-9d70-efa790fbce35-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.822115 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0a696d4-3301-4fd5-9d70-efa790fbce35-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.848762 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 08:43:21 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 17 08:43:21 crc kubenswrapper[4813]: [+]process-running ok Feb 17 08:43:21 crc kubenswrapper[4813]: healthz check failed Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.848813 4813 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.944242 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w257s"] Feb 17 08:43:21 crc kubenswrapper[4813]: W0217 08:43:21.968988 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ae65165_a983_4b8c_8478_55c0853def8a.slice/crio-6080a785ef1b930aa4c920d3839200a2c3fa2471c1e198d3d22ce74aee1ebc98 WatchSource:0}: Error finding container 6080a785ef1b930aa4c920d3839200a2c3fa2471c1e198d3d22ce74aee1ebc98: Status 404 returned error can't find the container with id 6080a785ef1b930aa4c920d3839200a2c3fa2471c1e198d3d22ce74aee1ebc98 Feb 17 08:43:21 crc kubenswrapper[4813]: I0217 08:43:21.995210 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jhzww"] Feb 17 08:43:22 crc kubenswrapper[4813]: W0217 08:43:22.118843 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12b56a13_1891_46e8_9f6a_0045496cb7ee.slice/crio-734990c438a97ed38fbbee6afd835b60060edc26edf09fc279a0d4829c7bb5a4 WatchSource:0}: Error finding container 734990c438a97ed38fbbee6afd835b60060edc26edf09fc279a0d4829c7bb5a4: Status 404 returned error can't find the container with id 734990c438a97ed38fbbee6afd835b60060edc26edf09fc279a0d4829c7bb5a4 Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.307563 4813 generic.go:334] "Generic (PLEG): container finished" podID="0ae65165-a983-4b8c-8478-55c0853def8a" containerID="4146cbc3815fe78cff4f29ffb66f38a4e8d0a094dc47dcec3c77f297dd866610" exitCode=0 Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.307644 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w257s" event={"ID":"0ae65165-a983-4b8c-8478-55c0853def8a","Type":"ContainerDied","Data":"4146cbc3815fe78cff4f29ffb66f38a4e8d0a094dc47dcec3c77f297dd866610"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.308053 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w257s" event={"ID":"0ae65165-a983-4b8c-8478-55c0853def8a","Type":"ContainerStarted","Data":"6080a785ef1b930aa4c920d3839200a2c3fa2471c1e198d3d22ce74aee1ebc98"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.310888 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.312170 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c45ae7e7-0aba-442e-85d3-c2a10878b247","Type":"ContainerStarted","Data":"67c91a8298debdd8e3e693f70e189bd0217a7876f3a21f80e91b4ba59c13c3c5"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.312228 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c45ae7e7-0aba-442e-85d3-c2a10878b247","Type":"ContainerStarted","Data":"fc97f573c9bb3fb0da4f94253546b792e9b6c3c1deba75254eb682be2654c18f"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.314462 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"43d9ef101c58b270717cad96bf9e21a7253d4f9c660545d145fc690368e50b1f"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.314495 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"921b11276e77ef45f93f73b75ad546d053ba337adf4a807d572f4c84e08b6082"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.319549 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6a2fbb8f095661b5bdeb1bba562abfa16671b57a43dd11f1b2b5b729c68a894e"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.319600 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8afc19da9e5e5503500d31e0e822b2dc7dfc37f1192d5f94a53678443f2a702d"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.320334 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.322913 4813 generic.go:334] "Generic (PLEG): container finished" podID="064c46bd-0e88-4dca-9a42-923b3eae48a1" containerID="d02f5508bccd8ed1dadc049170da963a78caee4176bebe1b53d90327dd61c200" exitCode=0 Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.322971 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxj57" event={"ID":"064c46bd-0e88-4dca-9a42-923b3eae48a1","Type":"ContainerDied","Data":"d02f5508bccd8ed1dadc049170da963a78caee4176bebe1b53d90327dd61c200"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.322990 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxj57" event={"ID":"064c46bd-0e88-4dca-9a42-923b3eae48a1","Type":"ContainerStarted","Data":"9466a489ddeb45e261233f7d1b5a97a8f620a8ff3d454ad8a23d7e2ea40b42d2"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.330544 4813 generic.go:334] 
"Generic (PLEG): container finished" podID="12b56a13-1891-46e8-9f6a-0045496cb7ee" containerID="43a483c1ff801d6a1ee37a1bf2e1525272df2e03534aab4011f0ff1fcc00e5eb" exitCode=0 Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.330643 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhzww" event={"ID":"12b56a13-1891-46e8-9f6a-0045496cb7ee","Type":"ContainerDied","Data":"43a483c1ff801d6a1ee37a1bf2e1525272df2e03534aab4011f0ff1fcc00e5eb"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.330670 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhzww" event={"ID":"12b56a13-1891-46e8-9f6a-0045496cb7ee","Type":"ContainerStarted","Data":"734990c438a97ed38fbbee6afd835b60060edc26edf09fc279a0d4829c7bb5a4"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.332234 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a37de13e0b6c331cf6817d23e3b528342597bb4909392f6c97c64d2969faeb54"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.332255 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d0e9b64fdcaf249dcc737797e78cb8c543cba40befc692e0c39e4b231897aacb"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.335987 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" event={"ID":"b0a696d4-3301-4fd5-9d70-efa790fbce35","Type":"ContainerDied","Data":"19b316341b1e2857aa23b414199774487b82f365cedda4590119f891a76d21df"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.336008 4813 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="19b316341b1e2857aa23b414199774487b82f365cedda4590119f891a76d21df" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.336077 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.337580 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d0624db-755c-4a56-afd4-02eeb8b8b1db" containerID="a6d3617fcfc1613ad54c5a7bc8c19999a88585719b96c8537763829cafefd81a" exitCode=0 Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.337620 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wtt9" event={"ID":"0d0624db-755c-4a56-afd4-02eeb8b8b1db","Type":"ContainerDied","Data":"a6d3617fcfc1613ad54c5a7bc8c19999a88585719b96c8537763829cafefd81a"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.337646 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wtt9" event={"ID":"0d0624db-755c-4a56-afd4-02eeb8b8b1db","Type":"ContainerStarted","Data":"a7f92c038175d0c5cef83f1070edde485350d1f5e55f3dfed1de97a8a5158851"} Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.337748 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.398412 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.398394168 podStartE2EDuration="2.398394168s" podCreationTimestamp="2026-02-17 08:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:22.39735074 +0000 UTC m=+150.058111963" watchObservedRunningTime="2026-02-17 08:43:22.398394168 +0000 UTC m=+150.059155391" Feb 17 08:43:22 crc 
kubenswrapper[4813]: I0217 08:43:22.726110 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4gsnb"] Feb 17 08:43:22 crc kubenswrapper[4813]: E0217 08:43:22.726320 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a696d4-3301-4fd5-9d70-efa790fbce35" containerName="collect-profiles" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.726334 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a696d4-3301-4fd5-9d70-efa790fbce35" containerName="collect-profiles" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.726463 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a696d4-3301-4fd5-9d70-efa790fbce35" containerName="collect-profiles" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.727284 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.731864 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.746236 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gsnb"] Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.847002 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 08:43:22 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Feb 17 08:43:22 crc kubenswrapper[4813]: [+]process-running ok Feb 17 08:43:22 crc kubenswrapper[4813]: healthz check failed Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.847064 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" 
podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.875882 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d05c6fb-46ad-4722-b20f-c42bba042431-catalog-content\") pod \"redhat-marketplace-4gsnb\" (UID: \"2d05c6fb-46ad-4722-b20f-c42bba042431\") " pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.875934 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d05c6fb-46ad-4722-b20f-c42bba042431-utilities\") pod \"redhat-marketplace-4gsnb\" (UID: \"2d05c6fb-46ad-4722-b20f-c42bba042431\") " pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.875959 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42jvh\" (UniqueName: \"kubernetes.io/projected/2d05c6fb-46ad-4722-b20f-c42bba042431-kube-api-access-42jvh\") pod \"redhat-marketplace-4gsnb\" (UID: \"2d05c6fb-46ad-4722-b20f-c42bba042431\") " pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.977281 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d05c6fb-46ad-4722-b20f-c42bba042431-catalog-content\") pod \"redhat-marketplace-4gsnb\" (UID: \"2d05c6fb-46ad-4722-b20f-c42bba042431\") " pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.977608 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2d05c6fb-46ad-4722-b20f-c42bba042431-utilities\") pod \"redhat-marketplace-4gsnb\" (UID: \"2d05c6fb-46ad-4722-b20f-c42bba042431\") " pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.977639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42jvh\" (UniqueName: \"kubernetes.io/projected/2d05c6fb-46ad-4722-b20f-c42bba042431-kube-api-access-42jvh\") pod \"redhat-marketplace-4gsnb\" (UID: \"2d05c6fb-46ad-4722-b20f-c42bba042431\") " pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.977995 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d05c6fb-46ad-4722-b20f-c42bba042431-catalog-content\") pod \"redhat-marketplace-4gsnb\" (UID: \"2d05c6fb-46ad-4722-b20f-c42bba042431\") " pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.978021 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d05c6fb-46ad-4722-b20f-c42bba042431-utilities\") pod \"redhat-marketplace-4gsnb\" (UID: \"2d05c6fb-46ad-4722-b20f-c42bba042431\") " pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:43:22 crc kubenswrapper[4813]: I0217 08:43:22.999461 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42jvh\" (UniqueName: \"kubernetes.io/projected/2d05c6fb-46ad-4722-b20f-c42bba042431-kube-api-access-42jvh\") pod \"redhat-marketplace-4gsnb\" (UID: \"2d05c6fb-46ad-4722-b20f-c42bba042431\") " pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.046164 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.128357 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xzz2v"] Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.129242 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzz2v"] Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.129342 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xzz2v" Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.280930 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add7ab28-9545-450a-9423-dfff6734b0eb-catalog-content\") pod \"redhat-marketplace-xzz2v\" (UID: \"add7ab28-9545-450a-9423-dfff6734b0eb\") " pod="openshift-marketplace/redhat-marketplace-xzz2v" Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.281028 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql8n4\" (UniqueName: \"kubernetes.io/projected/add7ab28-9545-450a-9423-dfff6734b0eb-kube-api-access-ql8n4\") pod \"redhat-marketplace-xzz2v\" (UID: \"add7ab28-9545-450a-9423-dfff6734b0eb\") " pod="openshift-marketplace/redhat-marketplace-xzz2v" Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.281051 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add7ab28-9545-450a-9423-dfff6734b0eb-utilities\") pod \"redhat-marketplace-xzz2v\" (UID: \"add7ab28-9545-450a-9423-dfff6734b0eb\") " pod="openshift-marketplace/redhat-marketplace-xzz2v" Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.363440 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="c45ae7e7-0aba-442e-85d3-c2a10878b247" containerID="67c91a8298debdd8e3e693f70e189bd0217a7876f3a21f80e91b4ba59c13c3c5" exitCode=0
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.363509 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c45ae7e7-0aba-442e-85d3-c2a10878b247","Type":"ContainerDied","Data":"67c91a8298debdd8e3e693f70e189bd0217a7876f3a21f80e91b4ba59c13c3c5"}
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.382558 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql8n4\" (UniqueName: \"kubernetes.io/projected/add7ab28-9545-450a-9423-dfff6734b0eb-kube-api-access-ql8n4\") pod \"redhat-marketplace-xzz2v\" (UID: \"add7ab28-9545-450a-9423-dfff6734b0eb\") " pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.382945 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add7ab28-9545-450a-9423-dfff6734b0eb-utilities\") pod \"redhat-marketplace-xzz2v\" (UID: \"add7ab28-9545-450a-9423-dfff6734b0eb\") " pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.382989 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add7ab28-9545-450a-9423-dfff6734b0eb-catalog-content\") pod \"redhat-marketplace-xzz2v\" (UID: \"add7ab28-9545-450a-9423-dfff6734b0eb\") " pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.383453 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add7ab28-9545-450a-9423-dfff6734b0eb-catalog-content\") pod \"redhat-marketplace-xzz2v\" (UID: \"add7ab28-9545-450a-9423-dfff6734b0eb\") " pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.383602 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add7ab28-9545-450a-9423-dfff6734b0eb-utilities\") pod \"redhat-marketplace-xzz2v\" (UID: \"add7ab28-9545-450a-9423-dfff6734b0eb\") " pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.412838 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql8n4\" (UniqueName: \"kubernetes.io/projected/add7ab28-9545-450a-9423-dfff6734b0eb-kube-api-access-ql8n4\") pod \"redhat-marketplace-xzz2v\" (UID: \"add7ab28-9545-450a-9423-dfff6734b0eb\") " pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.461869 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.587510 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gsnb"]
Feb 17 08:43:23 crc kubenswrapper[4813]: W0217 08:43:23.612975 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d05c6fb_46ad_4722_b20f_c42bba042431.slice/crio-1b70a931587f26b9b904b3e14f146f1f8b54c16ab51edec769931754bc7ebf6f WatchSource:0}: Error finding container 1b70a931587f26b9b904b3e14f146f1f8b54c16ab51edec769931754bc7ebf6f: Status 404 returned error can't find the container with id 1b70a931587f26b9b904b3e14f146f1f8b54c16ab51edec769931754bc7ebf6f
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.753571 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzz2v"]
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.842726 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:23 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:23 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:23 crc kubenswrapper[4813]: healthz check failed
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.842882 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.923329 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rmz42"]
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.940702 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.943591 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmz42"]
Feb 17 08:43:23 crc kubenswrapper[4813]: I0217 08:43:23.956396 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.005197 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e827307-21c0-4712-ab98-e95d277f4201-catalog-content\") pod \"redhat-operators-rmz42\" (UID: \"8e827307-21c0-4712-ab98-e95d277f4201\") " pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.005243 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz9dc\" (UniqueName: \"kubernetes.io/projected/8e827307-21c0-4712-ab98-e95d277f4201-kube-api-access-jz9dc\") pod \"redhat-operators-rmz42\" (UID: \"8e827307-21c0-4712-ab98-e95d277f4201\") " pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.005270 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e827307-21c0-4712-ab98-e95d277f4201-utilities\") pod \"redhat-operators-rmz42\" (UID: \"8e827307-21c0-4712-ab98-e95d277f4201\") " pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.082082 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.107640 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e827307-21c0-4712-ab98-e95d277f4201-utilities\") pod \"redhat-operators-rmz42\" (UID: \"8e827307-21c0-4712-ab98-e95d277f4201\") " pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.107837 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e827307-21c0-4712-ab98-e95d277f4201-catalog-content\") pod \"redhat-operators-rmz42\" (UID: \"8e827307-21c0-4712-ab98-e95d277f4201\") " pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.108049 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz9dc\" (UniqueName: \"kubernetes.io/projected/8e827307-21c0-4712-ab98-e95d277f4201-kube-api-access-jz9dc\") pod \"redhat-operators-rmz42\" (UID: \"8e827307-21c0-4712-ab98-e95d277f4201\") " pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.108241 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e827307-21c0-4712-ab98-e95d277f4201-utilities\") pod \"redhat-operators-rmz42\" (UID: \"8e827307-21c0-4712-ab98-e95d277f4201\") " pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.108336 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e827307-21c0-4712-ab98-e95d277f4201-catalog-content\") pod \"redhat-operators-rmz42\" (UID: \"8e827307-21c0-4712-ab98-e95d277f4201\") " pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.115935 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-7rbpj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.115982 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-7rbpj" podUID="dcabcb2d-1368-4303-b9e8-f7fd269ce1ca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.123531 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-7rbpj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.123592 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-7rbpj" podUID="dcabcb2d-1368-4303-b9e8-f7fd269ce1ca" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.131450 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-x222k"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.132099 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-x222k"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.138197 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz9dc\" (UniqueName: \"kubernetes.io/projected/8e827307-21c0-4712-ab98-e95d277f4201-kube-api-access-jz9dc\") pod \"redhat-operators-rmz42\" (UID: \"8e827307-21c0-4712-ab98-e95d277f4201\") " pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.147631 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-x222k"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.195919 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-l2l9m"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.195983 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-l2l9m"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.199694 4813 patch_prober.go:28] interesting pod/console-f9d7485db-l2l9m container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.199794 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-l2l9m" podUID="70da8a3c-ff49-4f82-a68b-d955c2cceb2b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.286953 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.317486 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wb7ln"]
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.318484 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.342117 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wb7ln"]
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.374045 4813 generic.go:334] "Generic (PLEG): container finished" podID="2d05c6fb-46ad-4722-b20f-c42bba042431" containerID="4c4314bf2eedbfc3fe285f440effd7d5474f69a6c7c4dd19e620c4426bf18f6b" exitCode=0
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.374156 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gsnb" event={"ID":"2d05c6fb-46ad-4722-b20f-c42bba042431","Type":"ContainerDied","Data":"4c4314bf2eedbfc3fe285f440effd7d5474f69a6c7c4dd19e620c4426bf18f6b"}
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.374224 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gsnb" event={"ID":"2d05c6fb-46ad-4722-b20f-c42bba042431","Type":"ContainerStarted","Data":"1b70a931587f26b9b904b3e14f146f1f8b54c16ab51edec769931754bc7ebf6f"}
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.378186 4813 generic.go:334] "Generic (PLEG): container finished" podID="add7ab28-9545-450a-9423-dfff6734b0eb" containerID="ef131fe705b48f54c35f4c59d99c52f487e41230c61d64e78066e87de2c07cc6" exitCode=0
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.378226 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzz2v" event={"ID":"add7ab28-9545-450a-9423-dfff6734b0eb","Type":"ContainerDied","Data":"ef131fe705b48f54c35f4c59d99c52f487e41230c61d64e78066e87de2c07cc6"}
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.378257 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzz2v" event={"ID":"add7ab28-9545-450a-9423-dfff6734b0eb","Type":"ContainerStarted","Data":"5d53ac12ea38e21abc8d93e426b4e454a1891fe9b6ecd617e9ade315bf72af3e"}
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.384089 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-x222k"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.418871 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0882a10e-939a-4cb5-862f-ba18fa5e4718-utilities\") pod \"redhat-operators-wb7ln\" (UID: \"0882a10e-939a-4cb5-862f-ba18fa5e4718\") " pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.419127 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pxlc\" (UniqueName: \"kubernetes.io/projected/0882a10e-939a-4cb5-862f-ba18fa5e4718-kube-api-access-7pxlc\") pod \"redhat-operators-wb7ln\" (UID: \"0882a10e-939a-4cb5-862f-ba18fa5e4718\") " pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.419178 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0882a10e-939a-4cb5-862f-ba18fa5e4718-catalog-content\") pod \"redhat-operators-wb7ln\" (UID: \"0882a10e-939a-4cb5-862f-ba18fa5e4718\") " pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.520905 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0882a10e-939a-4cb5-862f-ba18fa5e4718-utilities\") pod \"redhat-operators-wb7ln\" (UID: \"0882a10e-939a-4cb5-862f-ba18fa5e4718\") " pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.521012 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pxlc\" (UniqueName: \"kubernetes.io/projected/0882a10e-939a-4cb5-862f-ba18fa5e4718-kube-api-access-7pxlc\") pod \"redhat-operators-wb7ln\" (UID: \"0882a10e-939a-4cb5-862f-ba18fa5e4718\") " pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.521099 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0882a10e-939a-4cb5-862f-ba18fa5e4718-catalog-content\") pod \"redhat-operators-wb7ln\" (UID: \"0882a10e-939a-4cb5-862f-ba18fa5e4718\") " pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.523748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0882a10e-939a-4cb5-862f-ba18fa5e4718-utilities\") pod \"redhat-operators-wb7ln\" (UID: \"0882a10e-939a-4cb5-862f-ba18fa5e4718\") " pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.525324 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0882a10e-939a-4cb5-862f-ba18fa5e4718-catalog-content\") pod \"redhat-operators-wb7ln\" (UID: \"0882a10e-939a-4cb5-862f-ba18fa5e4718\") " pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.561084 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmz42"]
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.566241 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pxlc\" (UniqueName: \"kubernetes.io/projected/0882a10e-939a-4cb5-862f-ba18fa5e4718-kube-api-access-7pxlc\") pod \"redhat-operators-wb7ln\" (UID: \"0882a10e-939a-4cb5-862f-ba18fa5e4718\") " pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:43:24 crc kubenswrapper[4813]: W0217 08:43:24.589565 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e827307_21c0_4712_ab98_e95d277f4201.slice/crio-9fca3f85be2114049942f5bed216d37fd90c06dd503e8a25c40cbfe16320d359 WatchSource:0}: Error finding container 9fca3f85be2114049942f5bed216d37fd90c06dd503e8a25c40cbfe16320d359: Status 404 returned error can't find the container with id 9fca3f85be2114049942f5bed216d37fd90c06dd503e8a25c40cbfe16320d359
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.643056 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.723486 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.824830 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c45ae7e7-0aba-442e-85d3-c2a10878b247-kube-api-access\") pod \"c45ae7e7-0aba-442e-85d3-c2a10878b247\" (UID: \"c45ae7e7-0aba-442e-85d3-c2a10878b247\") "
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.825267 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c45ae7e7-0aba-442e-85d3-c2a10878b247-kubelet-dir\") pod \"c45ae7e7-0aba-442e-85d3-c2a10878b247\" (UID: \"c45ae7e7-0aba-442e-85d3-c2a10878b247\") "
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.825642 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c45ae7e7-0aba-442e-85d3-c2a10878b247-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c45ae7e7-0aba-442e-85d3-c2a10878b247" (UID: "c45ae7e7-0aba-442e-85d3-c2a10878b247"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.831617 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45ae7e7-0aba-442e-85d3-c2a10878b247-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c45ae7e7-0aba-442e-85d3-c2a10878b247" (UID: "c45ae7e7-0aba-442e-85d3-c2a10878b247"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.840756 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-z2wgd"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.846610 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:24 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:24 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:24 crc kubenswrapper[4813]: healthz check failed
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.846753 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.885951 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wb7ln"]
Feb 17 08:43:24 crc kubenswrapper[4813]: W0217 08:43:24.899578 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0882a10e_939a_4cb5_862f_ba18fa5e4718.slice/crio-920434d3bd6dcd788fbdb9b835779e81f492425f021e00a8ee4c22917b70b0a1 WatchSource:0}: Error finding container 920434d3bd6dcd788fbdb9b835779e81f492425f021e00a8ee4c22917b70b0a1: Status 404 returned error can't find the container with id 920434d3bd6dcd788fbdb9b835779e81f492425f021e00a8ee4c22917b70b0a1
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.927568 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c45ae7e7-0aba-442e-85d3-c2a10878b247-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 17 08:43:24 crc kubenswrapper[4813]: I0217 08:43:24.927597 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c45ae7e7-0aba-442e-85d3-c2a10878b247-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.386120 4813 generic.go:334] "Generic (PLEG): container finished" podID="8e827307-21c0-4712-ab98-e95d277f4201" containerID="344e85d426a04b2e57c1c865eb4b3afa875066f57bb1be7d44cdcfb0e486e69f" exitCode=0
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.386239 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmz42" event={"ID":"8e827307-21c0-4712-ab98-e95d277f4201","Type":"ContainerDied","Data":"344e85d426a04b2e57c1c865eb4b3afa875066f57bb1be7d44cdcfb0e486e69f"}
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.387003 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmz42" event={"ID":"8e827307-21c0-4712-ab98-e95d277f4201","Type":"ContainerStarted","Data":"9fca3f85be2114049942f5bed216d37fd90c06dd503e8a25c40cbfe16320d359"}
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.389764 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c45ae7e7-0aba-442e-85d3-c2a10878b247","Type":"ContainerDied","Data":"fc97f573c9bb3fb0da4f94253546b792e9b6c3c1deba75254eb682be2654c18f"}
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.389804 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc97f573c9bb3fb0da4f94253546b792e9b6c3c1deba75254eb682be2654c18f"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.389828 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.397332 4813 generic.go:334] "Generic (PLEG): container finished" podID="0882a10e-939a-4cb5-862f-ba18fa5e4718" containerID="c5a2a996d0dbf2391598880fa6d4b14869e149d2a9a7e38baa770ecbfba92452" exitCode=0
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.397475 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7ln" event={"ID":"0882a10e-939a-4cb5-862f-ba18fa5e4718","Type":"ContainerDied","Data":"c5a2a996d0dbf2391598880fa6d4b14869e149d2a9a7e38baa770ecbfba92452"}
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.397512 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7ln" event={"ID":"0882a10e-939a-4cb5-862f-ba18fa5e4718","Type":"ContainerStarted","Data":"920434d3bd6dcd788fbdb9b835779e81f492425f021e00a8ee4c22917b70b0a1"}
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.731158 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 17 08:43:25 crc kubenswrapper[4813]: E0217 08:43:25.731449 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45ae7e7-0aba-442e-85d3-c2a10878b247" containerName="pruner"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.731462 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45ae7e7-0aba-442e-85d3-c2a10878b247" containerName="pruner"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.731564 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45ae7e7-0aba-442e-85d3-c2a10878b247" containerName="pruner"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.731985 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.735850 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.736397 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.737013 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.844741 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:25 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:25 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:25 crc kubenswrapper[4813]: healthz check failed
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.844802 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.857969 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c794250-6513-48ea-8574-d72e4a7199a3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2c794250-6513-48ea-8574-d72e4a7199a3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.859393 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c794250-6513-48ea-8574-d72e4a7199a3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2c794250-6513-48ea-8574-d72e4a7199a3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.974903 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c794250-6513-48ea-8574-d72e4a7199a3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2c794250-6513-48ea-8574-d72e4a7199a3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.975034 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c794250-6513-48ea-8574-d72e4a7199a3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2c794250-6513-48ea-8574-d72e4a7199a3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 08:43:25 crc kubenswrapper[4813]: I0217 08:43:25.975486 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c794250-6513-48ea-8574-d72e4a7199a3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2c794250-6513-48ea-8574-d72e4a7199a3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 08:43:26 crc kubenswrapper[4813]: I0217 08:43:26.005832 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c794250-6513-48ea-8574-d72e4a7199a3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2c794250-6513-48ea-8574-d72e4a7199a3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 08:43:26 crc kubenswrapper[4813]: I0217 08:43:26.055118 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 08:43:26 crc kubenswrapper[4813]: I0217 08:43:26.843699 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:26 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:26 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:26 crc kubenswrapper[4813]: healthz check failed
Feb 17 08:43:26 crc kubenswrapper[4813]: I0217 08:43:26.844020 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 08:43:26 crc kubenswrapper[4813]: I0217 08:43:26.927195 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 17 08:43:27 crc kubenswrapper[4813]: I0217 08:43:27.472891 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2c794250-6513-48ea-8574-d72e4a7199a3","Type":"ContainerStarted","Data":"f7b1f92cb4ed5da18f7675771f5f4eb9f4f50429db4c9f3988dcc38afc5db8ca"}
Feb 17 08:43:27 crc kubenswrapper[4813]: I0217 08:43:27.857703 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:27 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:27 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:27 crc kubenswrapper[4813]: healthz check failed
Feb 17 08:43:27 crc kubenswrapper[4813]: I0217 08:43:27.857763 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 08:43:28 crc kubenswrapper[4813]: I0217 08:43:28.843846 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:28 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:28 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:28 crc kubenswrapper[4813]: healthz check failed
Feb 17 08:43:28 crc kubenswrapper[4813]: I0217 08:43:28.844137 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 08:43:29 crc kubenswrapper[4813]: I0217 08:43:29.496635 4813 generic.go:334] "Generic (PLEG): container finished" podID="2c794250-6513-48ea-8574-d72e4a7199a3" containerID="8ab0b59750c2bbd1daf022cb022f34f42a4837fd74add36c75c82defa3c320f4" exitCode=0
Feb 17 08:43:29 crc kubenswrapper[4813]: I0217 08:43:29.496675 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2c794250-6513-48ea-8574-d72e4a7199a3","Type":"ContainerDied","Data":"8ab0b59750c2bbd1daf022cb022f34f42a4837fd74add36c75c82defa3c320f4"}
Feb 17 08:43:29 crc kubenswrapper[4813]: I0217 08:43:29.734947 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lmkjb"
Feb 17 08:43:29 crc kubenswrapper[4813]: I0217 08:43:29.845014 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:29 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:29 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:29 crc kubenswrapper[4813]: healthz check failed
Feb 17 08:43:29 crc kubenswrapper[4813]: I0217 08:43:29.845064 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 08:43:30 crc kubenswrapper[4813]: I0217 08:43:30.843434 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:30 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:30 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:30 crc kubenswrapper[4813]: healthz check failed
Feb 17 08:43:30 crc kubenswrapper[4813]: I0217 08:43:30.843491 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 08:43:31 crc kubenswrapper[4813]: I0217 08:43:31.842170 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:31 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:31 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:31 crc kubenswrapper[4813]: healthz check failed
Feb 17 08:43:31 crc kubenswrapper[4813]: I0217 08:43:31.842470 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 08:43:32 crc kubenswrapper[4813]: I0217 08:43:32.841950 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:32 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:32 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:32 crc kubenswrapper[4813]: healthz check failed
Feb 17 08:43:32 crc kubenswrapper[4813]: I0217 08:43:32.841997 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 08:43:33 crc kubenswrapper[4813]: I0217 08:43:33.842504 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 08:43:33 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Feb 17 08:43:33 crc kubenswrapper[4813]: [+]process-running ok
Feb 17 08:43:33 crc kubenswrapper[4813]: healthz check failed
Feb
17 08:43:33 crc kubenswrapper[4813]: I0217 08:43:33.842830 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 08:43:34 crc kubenswrapper[4813]: I0217 08:43:34.117611 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-7rbpj" Feb 17 08:43:34 crc kubenswrapper[4813]: I0217 08:43:34.196386 4813 patch_prober.go:28] interesting pod/console-f9d7485db-l2l9m container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 17 08:43:34 crc kubenswrapper[4813]: I0217 08:43:34.196443 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-l2l9m" podUID="70da8a3c-ff49-4f82-a68b-d955c2cceb2b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 17 08:43:34 crc kubenswrapper[4813]: I0217 08:43:34.841606 4813 patch_prober.go:28] interesting pod/router-default-5444994796-z2wgd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 08:43:34 crc kubenswrapper[4813]: [+]has-synced ok Feb 17 08:43:34 crc kubenswrapper[4813]: [+]process-running ok Feb 17 08:43:34 crc kubenswrapper[4813]: healthz check failed Feb 17 08:43:34 crc kubenswrapper[4813]: I0217 08:43:34.841681 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-z2wgd" podUID="1d3c0419-3831-4d6b-ada5-cc6a73f8a176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 
08:43:35 crc kubenswrapper[4813]: I0217 08:43:35.165948 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:43:35 crc kubenswrapper[4813]: I0217 08:43:35.166000 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:43:35 crc kubenswrapper[4813]: I0217 08:43:35.843010 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:35 crc kubenswrapper[4813]: I0217 08:43:35.845061 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-z2wgd" Feb 17 08:43:36 crc kubenswrapper[4813]: I0217 08:43:36.389377 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:43:36 crc kubenswrapper[4813]: I0217 08:43:36.396653 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b42b143b-e85b-44cc-a427-ba1ebd82c55b-metrics-certs\") pod \"network-metrics-daemon-srrq7\" (UID: \"b42b143b-e85b-44cc-a427-ba1ebd82c55b\") " pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:43:36 crc kubenswrapper[4813]: I0217 08:43:36.449959 4813 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-srrq7" Feb 17 08:43:40 crc kubenswrapper[4813]: I0217 08:43:40.613085 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 08:43:40 crc kubenswrapper[4813]: I0217 08:43:40.645935 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:43:40 crc kubenswrapper[4813]: I0217 08:43:40.754916 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c794250-6513-48ea-8574-d72e4a7199a3-kube-api-access\") pod \"2c794250-6513-48ea-8574-d72e4a7199a3\" (UID: \"2c794250-6513-48ea-8574-d72e4a7199a3\") " Feb 17 08:43:40 crc kubenswrapper[4813]: I0217 08:43:40.755127 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c794250-6513-48ea-8574-d72e4a7199a3-kubelet-dir\") pod \"2c794250-6513-48ea-8574-d72e4a7199a3\" (UID: \"2c794250-6513-48ea-8574-d72e4a7199a3\") " Feb 17 08:43:40 crc kubenswrapper[4813]: I0217 08:43:40.755271 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c794250-6513-48ea-8574-d72e4a7199a3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2c794250-6513-48ea-8574-d72e4a7199a3" (UID: "2c794250-6513-48ea-8574-d72e4a7199a3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:43:40 crc kubenswrapper[4813]: I0217 08:43:40.755963 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c794250-6513-48ea-8574-d72e4a7199a3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 08:43:40 crc kubenswrapper[4813]: I0217 08:43:40.768005 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c794250-6513-48ea-8574-d72e4a7199a3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2c794250-6513-48ea-8574-d72e4a7199a3" (UID: "2c794250-6513-48ea-8574-d72e4a7199a3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:43:40 crc kubenswrapper[4813]: I0217 08:43:40.857429 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c794250-6513-48ea-8574-d72e4a7199a3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 08:43:41 crc kubenswrapper[4813]: I0217 08:43:41.564678 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2c794250-6513-48ea-8574-d72e4a7199a3","Type":"ContainerDied","Data":"f7b1f92cb4ed5da18f7675771f5f4eb9f4f50429db4c9f3988dcc38afc5db8ca"} Feb 17 08:43:41 crc kubenswrapper[4813]: I0217 08:43:41.564720 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7b1f92cb4ed5da18f7675771f5f4eb9f4f50429db4c9f3988dcc38afc5db8ca" Feb 17 08:43:41 crc kubenswrapper[4813]: I0217 08:43:41.564803 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 08:43:44 crc kubenswrapper[4813]: I0217 08:43:44.200599 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:44 crc kubenswrapper[4813]: I0217 08:43:44.205516 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:43:51 crc kubenswrapper[4813]: I0217 08:43:51.175581 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 08:43:51 crc kubenswrapper[4813]: E0217 08:43:51.322777 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 17 08:43:51 crc kubenswrapper[4813]: E0217 08:43:51.323161 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42jvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4gsnb_openshift-marketplace(2d05c6fb-46ad-4722-b20f-c42bba042431): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 08:43:51 crc kubenswrapper[4813]: E0217 08:43:51.324540 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4gsnb" podUID="2d05c6fb-46ad-4722-b20f-c42bba042431" Feb 17 08:43:51 crc 
kubenswrapper[4813]: E0217 08:43:51.434565 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 17 08:43:51 crc kubenswrapper[4813]: E0217 08:43:51.434688 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsvpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-xxj57_openshift-marketplace(064c46bd-0e88-4dca-9a42-923b3eae48a1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 08:43:51 crc kubenswrapper[4813]: E0217 08:43:51.435838 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xxj57" podUID="064c46bd-0e88-4dca-9a42-923b3eae48a1" Feb 17 08:43:51 crc kubenswrapper[4813]: I0217 08:43:51.612254 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhzww" event={"ID":"12b56a13-1891-46e8-9f6a-0045496cb7ee","Type":"ContainerStarted","Data":"87fcda2bd48e8ca5f9c38f8b822574f84334554fb0dae21d0d8e3bc76e6f3825"} Feb 17 08:43:51 crc kubenswrapper[4813]: I0217 08:43:51.615464 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w257s" event={"ID":"0ae65165-a983-4b8c-8478-55c0853def8a","Type":"ContainerStarted","Data":"60c429a3afbfb7a38ab268e508a090397f7100b2e1ce2cbb78c33c6327ee58bd"} Feb 17 08:43:51 crc kubenswrapper[4813]: E0217 08:43:51.619241 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4gsnb" podUID="2d05c6fb-46ad-4722-b20f-c42bba042431" Feb 17 08:43:51 crc kubenswrapper[4813]: E0217 08:43:51.620957 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-xxj57" podUID="064c46bd-0e88-4dca-9a42-923b3eae48a1" Feb 17 08:43:51 crc kubenswrapper[4813]: I0217 08:43:51.693658 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-srrq7"] Feb 17 08:43:51 crc kubenswrapper[4813]: W0217 08:43:51.704462 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb42b143b_e85b_44cc_a427_ba1ebd82c55b.slice/crio-41b66fb4ec29935a38addc9905ae4cd145e12134759ee463020408b9dfa58fe3 WatchSource:0}: Error finding container 41b66fb4ec29935a38addc9905ae4cd145e12134759ee463020408b9dfa58fe3: Status 404 returned error can't find the container with id 41b66fb4ec29935a38addc9905ae4cd145e12134759ee463020408b9dfa58fe3 Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.624471 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d0624db-755c-4a56-afd4-02eeb8b8b1db" containerID="77b7a176934f3ad72e8c51cf0834c606954d6a54689f82f318890249deefea05" exitCode=0 Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.624510 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wtt9" event={"ID":"0d0624db-755c-4a56-afd4-02eeb8b8b1db","Type":"ContainerDied","Data":"77b7a176934f3ad72e8c51cf0834c606954d6a54689f82f318890249deefea05"} Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.628380 4813 generic.go:334] "Generic (PLEG): container finished" podID="0ae65165-a983-4b8c-8478-55c0853def8a" containerID="60c429a3afbfb7a38ab268e508a090397f7100b2e1ce2cbb78c33c6327ee58bd" exitCode=0 Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.628435 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w257s" event={"ID":"0ae65165-a983-4b8c-8478-55c0853def8a","Type":"ContainerDied","Data":"60c429a3afbfb7a38ab268e508a090397f7100b2e1ce2cbb78c33c6327ee58bd"} Feb 17 08:43:52 crc 
kubenswrapper[4813]: I0217 08:43:52.640335 4813 generic.go:334] "Generic (PLEG): container finished" podID="0882a10e-939a-4cb5-862f-ba18fa5e4718" containerID="4e4c933907538c29c38b69d809979884dc016f67d9f17b433993ea0b66eead67" exitCode=0 Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.640474 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7ln" event={"ID":"0882a10e-939a-4cb5-862f-ba18fa5e4718","Type":"ContainerDied","Data":"4e4c933907538c29c38b69d809979884dc016f67d9f17b433993ea0b66eead67"} Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.646676 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-srrq7" event={"ID":"b42b143b-e85b-44cc-a427-ba1ebd82c55b","Type":"ContainerStarted","Data":"8e5b72c0f0d69bcead8f488bc896579432d823fe68226d44a5470db0e08051b3"} Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.646736 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-srrq7" event={"ID":"b42b143b-e85b-44cc-a427-ba1ebd82c55b","Type":"ContainerStarted","Data":"41b66fb4ec29935a38addc9905ae4cd145e12134759ee463020408b9dfa58fe3"} Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.649776 4813 generic.go:334] "Generic (PLEG): container finished" podID="add7ab28-9545-450a-9423-dfff6734b0eb" containerID="a710f9d459a982d496159193b5afda45b493d4a37bafc65af6ca4f3c0100e098" exitCode=0 Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.649848 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzz2v" event={"ID":"add7ab28-9545-450a-9423-dfff6734b0eb","Type":"ContainerDied","Data":"a710f9d459a982d496159193b5afda45b493d4a37bafc65af6ca4f3c0100e098"} Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.659671 4813 generic.go:334] "Generic (PLEG): container finished" podID="8e827307-21c0-4712-ab98-e95d277f4201" 
containerID="d6db625f8c74eaae1978f56183b9212e1e63f473a3de392ff0ecc21bd0d3f5d9" exitCode=0 Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.659733 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmz42" event={"ID":"8e827307-21c0-4712-ab98-e95d277f4201","Type":"ContainerDied","Data":"d6db625f8c74eaae1978f56183b9212e1e63f473a3de392ff0ecc21bd0d3f5d9"} Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.662047 4813 generic.go:334] "Generic (PLEG): container finished" podID="12b56a13-1891-46e8-9f6a-0045496cb7ee" containerID="87fcda2bd48e8ca5f9c38f8b822574f84334554fb0dae21d0d8e3bc76e6f3825" exitCode=0 Feb 17 08:43:52 crc kubenswrapper[4813]: I0217 08:43:52.662086 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhzww" event={"ID":"12b56a13-1891-46e8-9f6a-0045496cb7ee","Type":"ContainerDied","Data":"87fcda2bd48e8ca5f9c38f8b822574f84334554fb0dae21d0d8e3bc76e6f3825"} Feb 17 08:43:53 crc kubenswrapper[4813]: I0217 08:43:53.669410 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-srrq7" event={"ID":"b42b143b-e85b-44cc-a427-ba1ebd82c55b","Type":"ContainerStarted","Data":"e73f6213d33399b21fa7ec0e5909e586721beb2bdc6b136e98173368de2d6582"} Feb 17 08:43:53 crc kubenswrapper[4813]: I0217 08:43:53.685735 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-srrq7" podStartSLOduration=160.685700644 podStartE2EDuration="2m40.685700644s" podCreationTimestamp="2026-02-17 08:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:43:53.684213053 +0000 UTC m=+181.344974276" watchObservedRunningTime="2026-02-17 08:43:53.685700644 +0000 UTC m=+181.346461867" Feb 17 08:43:54 crc kubenswrapper[4813]: I0217 08:43:54.366323 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nx689" Feb 17 08:43:55 crc kubenswrapper[4813]: I0217 08:43:55.681651 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhzww" event={"ID":"12b56a13-1891-46e8-9f6a-0045496cb7ee","Type":"ContainerStarted","Data":"ca1af306025340fe048039c3775a09cee342d248092eaba5d99283754c308c59"} Feb 17 08:43:55 crc kubenswrapper[4813]: I0217 08:43:55.700484 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jhzww" podStartSLOduration=2.413836481 podStartE2EDuration="34.700467288s" podCreationTimestamp="2026-02-17 08:43:21 +0000 UTC" firstStartedPulling="2026-02-17 08:43:22.332614462 +0000 UTC m=+149.993375685" lastFinishedPulling="2026-02-17 08:43:54.619245259 +0000 UTC m=+182.280006492" observedRunningTime="2026-02-17 08:43:55.699491401 +0000 UTC m=+183.360252624" watchObservedRunningTime="2026-02-17 08:43:55.700467288 +0000 UTC m=+183.361228511" Feb 17 08:43:57 crc kubenswrapper[4813]: I0217 08:43:57.693345 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w257s" event={"ID":"0ae65165-a983-4b8c-8478-55c0853def8a","Type":"ContainerStarted","Data":"ba5b621ec23f0ed52eabe886153cbab47128e63befa25879860a158bf1670f58"} Feb 17 08:43:58 crc kubenswrapper[4813]: I0217 08:43:58.706420 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wtt9" event={"ID":"0d0624db-755c-4a56-afd4-02eeb8b8b1db","Type":"ContainerStarted","Data":"d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89"} Feb 17 08:43:58 crc kubenswrapper[4813]: I0217 08:43:58.709559 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7ln" 
event={"ID":"0882a10e-939a-4cb5-862f-ba18fa5e4718","Type":"ContainerStarted","Data":"df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d"} Feb 17 08:43:58 crc kubenswrapper[4813]: I0217 08:43:58.711335 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzz2v" event={"ID":"add7ab28-9545-450a-9423-dfff6734b0eb","Type":"ContainerStarted","Data":"1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35"} Feb 17 08:43:58 crc kubenswrapper[4813]: I0217 08:43:58.713570 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmz42" event={"ID":"8e827307-21c0-4712-ab98-e95d277f4201","Type":"ContainerStarted","Data":"b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be"} Feb 17 08:43:58 crc kubenswrapper[4813]: I0217 08:43:58.739625 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4wtt9" podStartSLOduration=3.714951319 podStartE2EDuration="38.739611789s" podCreationTimestamp="2026-02-17 08:43:20 +0000 UTC" firstStartedPulling="2026-02-17 08:43:22.338960866 +0000 UTC m=+149.999722089" lastFinishedPulling="2026-02-17 08:43:57.363621336 +0000 UTC m=+185.024382559" observedRunningTime="2026-02-17 08:43:58.737624865 +0000 UTC m=+186.398386088" watchObservedRunningTime="2026-02-17 08:43:58.739611789 +0000 UTC m=+186.400373012" Feb 17 08:43:58 crc kubenswrapper[4813]: I0217 08:43:58.740537 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w257s" podStartSLOduration=3.646323014 podStartE2EDuration="37.740532555s" podCreationTimestamp="2026-02-17 08:43:21 +0000 UTC" firstStartedPulling="2026-02-17 08:43:22.310428823 +0000 UTC m=+149.971190086" lastFinishedPulling="2026-02-17 08:43:56.404638364 +0000 UTC m=+184.065399627" observedRunningTime="2026-02-17 08:43:57.711996042 +0000 UTC m=+185.372757345" 
watchObservedRunningTime="2026-02-17 08:43:58.740532555 +0000 UTC m=+186.401293778" Feb 17 08:43:58 crc kubenswrapper[4813]: I0217 08:43:58.760837 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xzz2v" podStartSLOduration=1.9977101419999999 podStartE2EDuration="35.760823592s" podCreationTimestamp="2026-02-17 08:43:23 +0000 UTC" firstStartedPulling="2026-02-17 08:43:24.385928223 +0000 UTC m=+152.046689446" lastFinishedPulling="2026-02-17 08:43:58.149041663 +0000 UTC m=+185.809802896" observedRunningTime="2026-02-17 08:43:58.758606881 +0000 UTC m=+186.419368104" watchObservedRunningTime="2026-02-17 08:43:58.760823592 +0000 UTC m=+186.421584815" Feb 17 08:43:58 crc kubenswrapper[4813]: I0217 08:43:58.836611 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rmz42" podStartSLOduration=2.973786894 podStartE2EDuration="35.836596412s" podCreationTimestamp="2026-02-17 08:43:23 +0000 UTC" firstStartedPulling="2026-02-17 08:43:25.392443551 +0000 UTC m=+153.053204774" lastFinishedPulling="2026-02-17 08:43:58.255253069 +0000 UTC m=+185.916014292" observedRunningTime="2026-02-17 08:43:58.830912116 +0000 UTC m=+186.491673339" watchObservedRunningTime="2026-02-17 08:43:58.836596412 +0000 UTC m=+186.497357635" Feb 17 08:43:58 crc kubenswrapper[4813]: I0217 08:43:58.857869 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wb7ln" podStartSLOduration=2.176888791 podStartE2EDuration="34.857850116s" podCreationTimestamp="2026-02-17 08:43:24 +0000 UTC" firstStartedPulling="2026-02-17 08:43:25.401395597 +0000 UTC m=+153.062156820" lastFinishedPulling="2026-02-17 08:43:58.082356882 +0000 UTC m=+185.743118145" observedRunningTime="2026-02-17 08:43:58.855527922 +0000 UTC m=+186.516289145" watchObservedRunningTime="2026-02-17 08:43:58.857850116 +0000 UTC m=+186.518611339" Feb 17 08:44:01 crc 
kubenswrapper[4813]: I0217 08:44:01.245838 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4wtt9"
Feb 17 08:44:01 crc kubenswrapper[4813]: I0217 08:44:01.246136 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4wtt9"
Feb 17 08:44:01 crc kubenswrapper[4813]: I0217 08:44:01.394466 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4wtt9"
Feb 17 08:44:01 crc kubenswrapper[4813]: I0217 08:44:01.503945 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w257s"
Feb 17 08:44:01 crc kubenswrapper[4813]: I0217 08:44:01.503987 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w257s"
Feb 17 08:44:01 crc kubenswrapper[4813]: I0217 08:44:01.565501 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w257s"
Feb 17 08:44:01 crc kubenswrapper[4813]: I0217 08:44:01.666612 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jhzww"
Feb 17 08:44:01 crc kubenswrapper[4813]: I0217 08:44:01.666678 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jhzww"
Feb 17 08:44:01 crc kubenswrapper[4813]: I0217 08:44:01.716503 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jhzww"
Feb 17 08:44:01 crc kubenswrapper[4813]: I0217 08:44:01.779269 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jhzww"
Feb 17 08:44:01 crc kubenswrapper[4813]: I0217 08:44:01.781828 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w257s"
Feb 17 08:44:02 crc kubenswrapper[4813]: I0217 08:44:02.571820 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-29bxl"]
Feb 17 08:44:03 crc kubenswrapper[4813]: I0217 08:44:03.462649 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:44:03 crc kubenswrapper[4813]: I0217 08:44:03.462982 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:44:03 crc kubenswrapper[4813]: I0217 08:44:03.517078 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:44:03 crc kubenswrapper[4813]: I0217 08:44:03.792967 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:44:03 crc kubenswrapper[4813]: I0217 08:44:03.931243 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jhzww"]
Feb 17 08:44:03 crc kubenswrapper[4813]: I0217 08:44:03.931505 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jhzww" podUID="12b56a13-1891-46e8-9f6a-0045496cb7ee" containerName="registry-server" containerID="cri-o://ca1af306025340fe048039c3775a09cee342d248092eaba5d99283754c308c59" gracePeriod=2
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.288064 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.289523 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.643402 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.643449 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wb7ln"
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.745459 4813 generic.go:334] "Generic (PLEG): container finished" podID="12b56a13-1891-46e8-9f6a-0045496cb7ee" containerID="ca1af306025340fe048039c3775a09cee342d248092eaba5d99283754c308c59" exitCode=0
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.746537 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhzww" event={"ID":"12b56a13-1891-46e8-9f6a-0045496cb7ee","Type":"ContainerDied","Data":"ca1af306025340fe048039c3775a09cee342d248092eaba5d99283754c308c59"}
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.923019 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 17 08:44:04 crc kubenswrapper[4813]: E0217 08:44:04.923466 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c794250-6513-48ea-8574-d72e4a7199a3" containerName="pruner"
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.923477 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c794250-6513-48ea-8574-d72e4a7199a3" containerName="pruner"
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.923583 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c794250-6513-48ea-8574-d72e4a7199a3" containerName="pruner"
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.923914 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.927631 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.928005 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.936385 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w257s"]
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.936751 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w257s" podUID="0ae65165-a983-4b8c-8478-55c0853def8a" containerName="registry-server" containerID="cri-o://ba5b621ec23f0ed52eabe886153cbab47128e63befa25879860a158bf1670f58" gracePeriod=2
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.969568 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09dbf2a1-c636-4b64-9576-9abf75ca8734-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"09dbf2a1-c636-4b64-9576-9abf75ca8734\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.969676 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09dbf2a1-c636-4b64-9576-9abf75ca8734-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"09dbf2a1-c636-4b64-9576-9abf75ca8734\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.973029 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 17 08:44:04 crc kubenswrapper[4813]: I0217 08:44:04.998872 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhzww"
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.070392 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09dbf2a1-c636-4b64-9576-9abf75ca8734-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"09dbf2a1-c636-4b64-9576-9abf75ca8734\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.070461 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09dbf2a1-c636-4b64-9576-9abf75ca8734-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"09dbf2a1-c636-4b64-9576-9abf75ca8734\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.070558 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09dbf2a1-c636-4b64-9576-9abf75ca8734-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"09dbf2a1-c636-4b64-9576-9abf75ca8734\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.088990 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09dbf2a1-c636-4b64-9576-9abf75ca8734-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"09dbf2a1-c636-4b64-9576-9abf75ca8734\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.166268 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.166374 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.171481 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t86sj\" (UniqueName: \"kubernetes.io/projected/12b56a13-1891-46e8-9f6a-0045496cb7ee-kube-api-access-t86sj\") pod \"12b56a13-1891-46e8-9f6a-0045496cb7ee\" (UID: \"12b56a13-1891-46e8-9f6a-0045496cb7ee\") "
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.171527 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b56a13-1891-46e8-9f6a-0045496cb7ee-catalog-content\") pod \"12b56a13-1891-46e8-9f6a-0045496cb7ee\" (UID: \"12b56a13-1891-46e8-9f6a-0045496cb7ee\") "
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.171654 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b56a13-1891-46e8-9f6a-0045496cb7ee-utilities\") pod \"12b56a13-1891-46e8-9f6a-0045496cb7ee\" (UID: \"12b56a13-1891-46e8-9f6a-0045496cb7ee\") "
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.172760 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b56a13-1891-46e8-9f6a-0045496cb7ee-utilities" (OuterVolumeSpecName: "utilities") pod "12b56a13-1891-46e8-9f6a-0045496cb7ee" (UID: "12b56a13-1891-46e8-9f6a-0045496cb7ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.175221 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b56a13-1891-46e8-9f6a-0045496cb7ee-kube-api-access-t86sj" (OuterVolumeSpecName: "kube-api-access-t86sj") pod "12b56a13-1891-46e8-9f6a-0045496cb7ee" (UID: "12b56a13-1891-46e8-9f6a-0045496cb7ee"). InnerVolumeSpecName "kube-api-access-t86sj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.244723 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b56a13-1891-46e8-9f6a-0045496cb7ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12b56a13-1891-46e8-9f6a-0045496cb7ee" (UID: "12b56a13-1891-46e8-9f6a-0045496cb7ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.255369 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.273366 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12b56a13-1891-46e8-9f6a-0045496cb7ee-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.273404 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t86sj\" (UniqueName: \"kubernetes.io/projected/12b56a13-1891-46e8-9f6a-0045496cb7ee-kube-api-access-t86sj\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.273418 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12b56a13-1891-46e8-9f6a-0045496cb7ee-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.351365 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rmz42" podUID="8e827307-21c0-4712-ab98-e95d277f4201" containerName="registry-server" probeResult="failure" output=<
Feb 17 08:44:05 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s
Feb 17 08:44:05 crc kubenswrapper[4813]: >
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.684042 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wb7ln" podUID="0882a10e-939a-4cb5-862f-ba18fa5e4718" containerName="registry-server" probeResult="failure" output=<
Feb 17 08:44:05 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s
Feb 17 08:44:05 crc kubenswrapper[4813]: >
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.695025 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.756559 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jhzww" event={"ID":"12b56a13-1891-46e8-9f6a-0045496cb7ee","Type":"ContainerDied","Data":"734990c438a97ed38fbbee6afd835b60060edc26edf09fc279a0d4829c7bb5a4"}
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.756575 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jhzww"
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.756630 4813 scope.go:117] "RemoveContainer" containerID="ca1af306025340fe048039c3775a09cee342d248092eaba5d99283754c308c59"
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.758227 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"09dbf2a1-c636-4b64-9576-9abf75ca8734","Type":"ContainerStarted","Data":"480e1e442e1947b16be4b06314809d40502cfa193aa81ec3e388158b04a7cddc"}
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.763689 4813 generic.go:334] "Generic (PLEG): container finished" podID="0ae65165-a983-4b8c-8478-55c0853def8a" containerID="ba5b621ec23f0ed52eabe886153cbab47128e63befa25879860a158bf1670f58" exitCode=0
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.763750 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w257s" event={"ID":"0ae65165-a983-4b8c-8478-55c0853def8a","Type":"ContainerDied","Data":"ba5b621ec23f0ed52eabe886153cbab47128e63befa25879860a158bf1670f58"}
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.792737 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jhzww"]
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.793508 4813 scope.go:117] "RemoveContainer" containerID="87fcda2bd48e8ca5f9c38f8b822574f84334554fb0dae21d0d8e3bc76e6f3825"
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.796690 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jhzww"]
Feb 17 08:44:05 crc kubenswrapper[4813]: I0217 08:44:05.815366 4813 scope.go:117] "RemoveContainer" containerID="43a483c1ff801d6a1ee37a1bf2e1525272df2e03534aab4011f0ff1fcc00e5eb"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.085501 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w257s"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.285214 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ae65165-a983-4b8c-8478-55c0853def8a-utilities\") pod \"0ae65165-a983-4b8c-8478-55c0853def8a\" (UID: \"0ae65165-a983-4b8c-8478-55c0853def8a\") "
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.285286 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npwj4\" (UniqueName: \"kubernetes.io/projected/0ae65165-a983-4b8c-8478-55c0853def8a-kube-api-access-npwj4\") pod \"0ae65165-a983-4b8c-8478-55c0853def8a\" (UID: \"0ae65165-a983-4b8c-8478-55c0853def8a\") "
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.285391 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ae65165-a983-4b8c-8478-55c0853def8a-catalog-content\") pod \"0ae65165-a983-4b8c-8478-55c0853def8a\" (UID: \"0ae65165-a983-4b8c-8478-55c0853def8a\") "
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.286352 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ae65165-a983-4b8c-8478-55c0853def8a-utilities" (OuterVolumeSpecName: "utilities") pod "0ae65165-a983-4b8c-8478-55c0853def8a" (UID: "0ae65165-a983-4b8c-8478-55c0853def8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.293513 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae65165-a983-4b8c-8478-55c0853def8a-kube-api-access-npwj4" (OuterVolumeSpecName: "kube-api-access-npwj4") pod "0ae65165-a983-4b8c-8478-55c0853def8a" (UID: "0ae65165-a983-4b8c-8478-55c0853def8a"). InnerVolumeSpecName "kube-api-access-npwj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.333247 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzz2v"]
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.333465 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xzz2v" podUID="add7ab28-9545-450a-9423-dfff6734b0eb" containerName="registry-server" containerID="cri-o://1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35" gracePeriod=2
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.350599 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ae65165-a983-4b8c-8478-55c0853def8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ae65165-a983-4b8c-8478-55c0853def8a" (UID: "0ae65165-a983-4b8c-8478-55c0853def8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.387921 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ae65165-a983-4b8c-8478-55c0853def8a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.388094 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ae65165-a983-4b8c-8478-55c0853def8a-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.388236 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npwj4\" (UniqueName: \"kubernetes.io/projected/0ae65165-a983-4b8c-8478-55c0853def8a-kube-api-access-npwj4\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.691112 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.772725 4813 generic.go:334] "Generic (PLEG): container finished" podID="09dbf2a1-c636-4b64-9576-9abf75ca8734" containerID="2d4db80baea34bb2c2604960a0dcd4cce9ae51bdc23728f61f2960c285421084" exitCode=0
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.773131 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"09dbf2a1-c636-4b64-9576-9abf75ca8734","Type":"ContainerDied","Data":"2d4db80baea34bb2c2604960a0dcd4cce9ae51bdc23728f61f2960c285421084"}
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.775053 4813 generic.go:334] "Generic (PLEG): container finished" podID="2d05c6fb-46ad-4722-b20f-c42bba042431" containerID="7c2cb3c9fb68d6d232cd1a79cafa404753117d1be70d1f20dce835f12a38982a" exitCode=0
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.775124 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gsnb" event={"ID":"2d05c6fb-46ad-4722-b20f-c42bba042431","Type":"ContainerDied","Data":"7c2cb3c9fb68d6d232cd1a79cafa404753117d1be70d1f20dce835f12a38982a"}
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.779140 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w257s" event={"ID":"0ae65165-a983-4b8c-8478-55c0853def8a","Type":"ContainerDied","Data":"6080a785ef1b930aa4c920d3839200a2c3fa2471c1e198d3d22ce74aee1ebc98"}
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.779186 4813 scope.go:117] "RemoveContainer" containerID="ba5b621ec23f0ed52eabe886153cbab47128e63befa25879860a158bf1670f58"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.779352 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w257s"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.787360 4813 generic.go:334] "Generic (PLEG): container finished" podID="add7ab28-9545-450a-9423-dfff6734b0eb" containerID="1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35" exitCode=0
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.787410 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzz2v" event={"ID":"add7ab28-9545-450a-9423-dfff6734b0eb","Type":"ContainerDied","Data":"1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35"}
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.787433 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzz2v" event={"ID":"add7ab28-9545-450a-9423-dfff6734b0eb","Type":"ContainerDied","Data":"5d53ac12ea38e21abc8d93e426b4e454a1891fe9b6ecd617e9ade315bf72af3e"}
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.787481 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xzz2v"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.792229 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add7ab28-9545-450a-9423-dfff6734b0eb-utilities\") pod \"add7ab28-9545-450a-9423-dfff6734b0eb\" (UID: \"add7ab28-9545-450a-9423-dfff6734b0eb\") "
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.792335 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add7ab28-9545-450a-9423-dfff6734b0eb-catalog-content\") pod \"add7ab28-9545-450a-9423-dfff6734b0eb\" (UID: \"add7ab28-9545-450a-9423-dfff6734b0eb\") "
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.792470 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql8n4\" (UniqueName: \"kubernetes.io/projected/add7ab28-9545-450a-9423-dfff6734b0eb-kube-api-access-ql8n4\") pod \"add7ab28-9545-450a-9423-dfff6734b0eb\" (UID: \"add7ab28-9545-450a-9423-dfff6734b0eb\") "
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.793779 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add7ab28-9545-450a-9423-dfff6734b0eb-utilities" (OuterVolumeSpecName: "utilities") pod "add7ab28-9545-450a-9423-dfff6734b0eb" (UID: "add7ab28-9545-450a-9423-dfff6734b0eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.805674 4813 scope.go:117] "RemoveContainer" containerID="60c429a3afbfb7a38ab268e508a090397f7100b2e1ce2cbb78c33c6327ee58bd"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.815347 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add7ab28-9545-450a-9423-dfff6734b0eb-kube-api-access-ql8n4" (OuterVolumeSpecName: "kube-api-access-ql8n4") pod "add7ab28-9545-450a-9423-dfff6734b0eb" (UID: "add7ab28-9545-450a-9423-dfff6734b0eb"). InnerVolumeSpecName "kube-api-access-ql8n4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.831045 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add7ab28-9545-450a-9423-dfff6734b0eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "add7ab28-9545-450a-9423-dfff6734b0eb" (UID: "add7ab28-9545-450a-9423-dfff6734b0eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.831597 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w257s"]
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.834085 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w257s"]
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.838499 4813 scope.go:117] "RemoveContainer" containerID="4146cbc3815fe78cff4f29ffb66f38a4e8d0a094dc47dcec3c77f297dd866610"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.852345 4813 scope.go:117] "RemoveContainer" containerID="1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.864454 4813 scope.go:117] "RemoveContainer" containerID="a710f9d459a982d496159193b5afda45b493d4a37bafc65af6ca4f3c0100e098"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.882480 4813 scope.go:117] "RemoveContainer" containerID="ef131fe705b48f54c35f4c59d99c52f487e41230c61d64e78066e87de2c07cc6"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.893766 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add7ab28-9545-450a-9423-dfff6734b0eb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.894059 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql8n4\" (UniqueName: \"kubernetes.io/projected/add7ab28-9545-450a-9423-dfff6734b0eb-kube-api-access-ql8n4\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.894231 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add7ab28-9545-450a-9423-dfff6734b0eb-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.896183 4813 scope.go:117] "RemoveContainer" containerID="1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35"
Feb 17 08:44:06 crc kubenswrapper[4813]: E0217 08:44:06.897288 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35\": container with ID starting with 1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35 not found: ID does not exist" containerID="1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.897349 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35"} err="failed to get container status \"1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35\": rpc error: code = NotFound desc = could not find container \"1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35\": container with ID starting with 1b8d994beb2ac2a03627cae7021469dd03a6d321102013bd979e411dfb214e35 not found: ID does not exist"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.897401 4813 scope.go:117] "RemoveContainer" containerID="a710f9d459a982d496159193b5afda45b493d4a37bafc65af6ca4f3c0100e098"
Feb 17 08:44:06 crc kubenswrapper[4813]: E0217 08:44:06.897755 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a710f9d459a982d496159193b5afda45b493d4a37bafc65af6ca4f3c0100e098\": container with ID starting with a710f9d459a982d496159193b5afda45b493d4a37bafc65af6ca4f3c0100e098 not found: ID does not exist" containerID="a710f9d459a982d496159193b5afda45b493d4a37bafc65af6ca4f3c0100e098"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.897799 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a710f9d459a982d496159193b5afda45b493d4a37bafc65af6ca4f3c0100e098"} err="failed to get container status \"a710f9d459a982d496159193b5afda45b493d4a37bafc65af6ca4f3c0100e098\": rpc error: code = NotFound desc = could not find container \"a710f9d459a982d496159193b5afda45b493d4a37bafc65af6ca4f3c0100e098\": container with ID starting with a710f9d459a982d496159193b5afda45b493d4a37bafc65af6ca4f3c0100e098 not found: ID does not exist"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.897833 4813 scope.go:117] "RemoveContainer" containerID="ef131fe705b48f54c35f4c59d99c52f487e41230c61d64e78066e87de2c07cc6"
Feb 17 08:44:06 crc kubenswrapper[4813]: E0217 08:44:06.898606 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef131fe705b48f54c35f4c59d99c52f487e41230c61d64e78066e87de2c07cc6\": container with ID starting with ef131fe705b48f54c35f4c59d99c52f487e41230c61d64e78066e87de2c07cc6 not found: ID does not exist" containerID="ef131fe705b48f54c35f4c59d99c52f487e41230c61d64e78066e87de2c07cc6"
Feb 17 08:44:06 crc kubenswrapper[4813]: I0217 08:44:06.898637 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef131fe705b48f54c35f4c59d99c52f487e41230c61d64e78066e87de2c07cc6"} err="failed to get container status \"ef131fe705b48f54c35f4c59d99c52f487e41230c61d64e78066e87de2c07cc6\": rpc error: code = NotFound desc = could not find container \"ef131fe705b48f54c35f4c59d99c52f487e41230c61d64e78066e87de2c07cc6\": container with ID starting with ef131fe705b48f54c35f4c59d99c52f487e41230c61d64e78066e87de2c07cc6 not found: ID does not exist"
Feb 17 08:44:07 crc kubenswrapper[4813]: I0217 08:44:07.132909 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae65165-a983-4b8c-8478-55c0853def8a" path="/var/lib/kubelet/pods/0ae65165-a983-4b8c-8478-55c0853def8a/volumes"
Feb 17 08:44:07 crc kubenswrapper[4813]: I0217 08:44:07.134759 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b56a13-1891-46e8-9f6a-0045496cb7ee" path="/var/lib/kubelet/pods/12b56a13-1891-46e8-9f6a-0045496cb7ee/volumes"
Feb 17 08:44:07 crc kubenswrapper[4813]: I0217 08:44:07.218061 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzz2v"]
Feb 17 08:44:07 crc kubenswrapper[4813]: I0217 08:44:07.223491 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzz2v"]
Feb 17 08:44:07 crc kubenswrapper[4813]: I0217 08:44:07.806397 4813 generic.go:334] "Generic (PLEG): container finished" podID="064c46bd-0e88-4dca-9a42-923b3eae48a1" containerID="2116145942fec1752312a92527c4d74f372ee8ecc5a3e25fb70c753717d5e013" exitCode=0
Feb 17 08:44:07 crc kubenswrapper[4813]: I0217 08:44:07.806487 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxj57" event={"ID":"064c46bd-0e88-4dca-9a42-923b3eae48a1","Type":"ContainerDied","Data":"2116145942fec1752312a92527c4d74f372ee8ecc5a3e25fb70c753717d5e013"}
Feb 17 08:44:07 crc kubenswrapper[4813]: I0217 08:44:07.813327 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gsnb" event={"ID":"2d05c6fb-46ad-4722-b20f-c42bba042431","Type":"ContainerStarted","Data":"ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4"}
Feb 17 08:44:07 crc kubenswrapper[4813]: I0217 08:44:07.854856 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4gsnb" podStartSLOduration=3.067668934 podStartE2EDuration="45.854829713s" podCreationTimestamp="2026-02-17 08:43:22 +0000 UTC" firstStartedPulling="2026-02-17 08:43:24.379422355 +0000 UTC m=+152.040183578" lastFinishedPulling="2026-02-17 08:44:07.166583124 +0000 UTC m=+194.827344357" observedRunningTime="2026-02-17 08:44:07.853765793 +0000 UTC m=+195.514527026" watchObservedRunningTime="2026-02-17 08:44:07.854829713 +0000 UTC m=+195.515590966"
Feb 17 08:44:08 crc kubenswrapper[4813]: I0217 08:44:08.127075 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 08:44:08 crc kubenswrapper[4813]: I0217 08:44:08.310980 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09dbf2a1-c636-4b64-9576-9abf75ca8734-kube-api-access\") pod \"09dbf2a1-c636-4b64-9576-9abf75ca8734\" (UID: \"09dbf2a1-c636-4b64-9576-9abf75ca8734\") "
Feb 17 08:44:08 crc kubenswrapper[4813]: I0217 08:44:08.311052 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09dbf2a1-c636-4b64-9576-9abf75ca8734-kubelet-dir\") pod \"09dbf2a1-c636-4b64-9576-9abf75ca8734\" (UID: \"09dbf2a1-c636-4b64-9576-9abf75ca8734\") "
Feb 17 08:44:08 crc kubenswrapper[4813]: I0217 08:44:08.311293 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09dbf2a1-c636-4b64-9576-9abf75ca8734-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "09dbf2a1-c636-4b64-9576-9abf75ca8734" (UID: "09dbf2a1-c636-4b64-9576-9abf75ca8734"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 08:44:08 crc kubenswrapper[4813]: I0217 08:44:08.316760 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09dbf2a1-c636-4b64-9576-9abf75ca8734-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "09dbf2a1-c636-4b64-9576-9abf75ca8734" (UID: "09dbf2a1-c636-4b64-9576-9abf75ca8734"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:44:08 crc kubenswrapper[4813]: I0217 08:44:08.412301 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09dbf2a1-c636-4b64-9576-9abf75ca8734-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:08 crc kubenswrapper[4813]: I0217 08:44:08.412345 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09dbf2a1-c636-4b64-9576-9abf75ca8734-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:08 crc kubenswrapper[4813]: I0217 08:44:08.823976 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxj57" event={"ID":"064c46bd-0e88-4dca-9a42-923b3eae48a1","Type":"ContainerStarted","Data":"1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6"}
Feb 17 08:44:08 crc kubenswrapper[4813]: I0217 08:44:08.826437 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"09dbf2a1-c636-4b64-9576-9abf75ca8734","Type":"ContainerDied","Data":"480e1e442e1947b16be4b06314809d40502cfa193aa81ec3e388158b04a7cddc"}
Feb 17 08:44:08 crc kubenswrapper[4813]: I0217 08:44:08.826499 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="480e1e442e1947b16be4b06314809d40502cfa193aa81ec3e388158b04a7cddc"
Feb 17 08:44:08 crc kubenswrapper[4813]: I0217 08:44:08.826510 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 08:44:08 crc kubenswrapper[4813]: I0217 08:44:08.854601 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xxj57" podStartSLOduration=2.968245065 podStartE2EDuration="48.854577968s" podCreationTimestamp="2026-02-17 08:43:20 +0000 UTC" firstStartedPulling="2026-02-17 08:43:22.324348555 +0000 UTC m=+149.985109808" lastFinishedPulling="2026-02-17 08:44:08.210681488 +0000 UTC m=+195.871442711" observedRunningTime="2026-02-17 08:44:08.852273834 +0000 UTC m=+196.513035097" watchObservedRunningTime="2026-02-17 08:44:08.854577968 +0000 UTC m=+196.515339201"
Feb 17 08:44:09 crc kubenswrapper[4813]: I0217 08:44:09.142044 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="add7ab28-9545-450a-9423-dfff6734b0eb" path="/var/lib/kubelet/pods/add7ab28-9545-450a-9423-dfff6734b0eb/volumes"
Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.059598 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xxj57"
Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.059952 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xxj57"
Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.309628 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4wtt9"
Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.322117 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 08:44:11 crc kubenswrapper[4813]: E0217 08:44:11.322517 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae65165-a983-4b8c-8478-55c0853def8a" containerName="extract-utilities"
Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.322553 4813 state_mem.go:107] "Deleted
CPUSet assignment" podUID="0ae65165-a983-4b8c-8478-55c0853def8a" containerName="extract-utilities" Feb 17 08:44:11 crc kubenswrapper[4813]: E0217 08:44:11.322572 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add7ab28-9545-450a-9423-dfff6734b0eb" containerName="extract-content" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.322585 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="add7ab28-9545-450a-9423-dfff6734b0eb" containerName="extract-content" Feb 17 08:44:11 crc kubenswrapper[4813]: E0217 08:44:11.322598 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b56a13-1891-46e8-9f6a-0045496cb7ee" containerName="extract-utilities" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.322613 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b56a13-1891-46e8-9f6a-0045496cb7ee" containerName="extract-utilities" Feb 17 08:44:11 crc kubenswrapper[4813]: E0217 08:44:11.322635 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae65165-a983-4b8c-8478-55c0853def8a" containerName="extract-content" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.322651 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae65165-a983-4b8c-8478-55c0853def8a" containerName="extract-content" Feb 17 08:44:11 crc kubenswrapper[4813]: E0217 08:44:11.322677 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09dbf2a1-c636-4b64-9576-9abf75ca8734" containerName="pruner" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.322695 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="09dbf2a1-c636-4b64-9576-9abf75ca8734" containerName="pruner" Feb 17 08:44:11 crc kubenswrapper[4813]: E0217 08:44:11.322726 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add7ab28-9545-450a-9423-dfff6734b0eb" containerName="registry-server" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.322740 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="add7ab28-9545-450a-9423-dfff6734b0eb" containerName="registry-server" Feb 17 08:44:11 crc kubenswrapper[4813]: E0217 08:44:11.322759 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b56a13-1891-46e8-9f6a-0045496cb7ee" containerName="extract-content" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.322794 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b56a13-1891-46e8-9f6a-0045496cb7ee" containerName="extract-content" Feb 17 08:44:11 crc kubenswrapper[4813]: E0217 08:44:11.322807 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add7ab28-9545-450a-9423-dfff6734b0eb" containerName="extract-utilities" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.322819 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="add7ab28-9545-450a-9423-dfff6734b0eb" containerName="extract-utilities" Feb 17 08:44:11 crc kubenswrapper[4813]: E0217 08:44:11.322832 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae65165-a983-4b8c-8478-55c0853def8a" containerName="registry-server" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.322844 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae65165-a983-4b8c-8478-55c0853def8a" containerName="registry-server" Feb 17 08:44:11 crc kubenswrapper[4813]: E0217 08:44:11.322860 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b56a13-1891-46e8-9f6a-0045496cb7ee" containerName="registry-server" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.322872 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b56a13-1891-46e8-9f6a-0045496cb7ee" containerName="registry-server" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.323034 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="09dbf2a1-c636-4b64-9576-9abf75ca8734" containerName="pruner" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.323064 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="12b56a13-1891-46e8-9f6a-0045496cb7ee" containerName="registry-server" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.323098 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="add7ab28-9545-450a-9423-dfff6734b0eb" containerName="registry-server" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.323113 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae65165-a983-4b8c-8478-55c0853def8a" containerName="registry-server" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.323819 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.329105 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.329513 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.342438 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.450958 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56f75abd-b269-471e-bf17-66e5c0afb5dd-var-lock\") pod \"installer-9-crc\" (UID: \"56f75abd-b269-471e-bf17-66e5c0afb5dd\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.451430 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56f75abd-b269-471e-bf17-66e5c0afb5dd-kubelet-dir\") pod \"installer-9-crc\" (UID: \"56f75abd-b269-471e-bf17-66e5c0afb5dd\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 08:44:11 crc 
kubenswrapper[4813]: I0217 08:44:11.451585 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56f75abd-b269-471e-bf17-66e5c0afb5dd-kube-api-access\") pod \"installer-9-crc\" (UID: \"56f75abd-b269-471e-bf17-66e5c0afb5dd\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.553076 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56f75abd-b269-471e-bf17-66e5c0afb5dd-kube-api-access\") pod \"installer-9-crc\" (UID: \"56f75abd-b269-471e-bf17-66e5c0afb5dd\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.553188 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56f75abd-b269-471e-bf17-66e5c0afb5dd-var-lock\") pod \"installer-9-crc\" (UID: \"56f75abd-b269-471e-bf17-66e5c0afb5dd\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.553340 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56f75abd-b269-471e-bf17-66e5c0afb5dd-kubelet-dir\") pod \"installer-9-crc\" (UID: \"56f75abd-b269-471e-bf17-66e5c0afb5dd\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.553428 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56f75abd-b269-471e-bf17-66e5c0afb5dd-kubelet-dir\") pod \"installer-9-crc\" (UID: \"56f75abd-b269-471e-bf17-66e5c0afb5dd\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.553497 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/56f75abd-b269-471e-bf17-66e5c0afb5dd-var-lock\") pod \"installer-9-crc\" (UID: \"56f75abd-b269-471e-bf17-66e5c0afb5dd\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.581502 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56f75abd-b269-471e-bf17-66e5c0afb5dd-kube-api-access\") pod \"installer-9-crc\" (UID: \"56f75abd-b269-471e-bf17-66e5c0afb5dd\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.684062 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 08:44:11 crc kubenswrapper[4813]: I0217 08:44:11.936934 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 08:44:12 crc kubenswrapper[4813]: I0217 08:44:12.134470 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xxj57" podUID="064c46bd-0e88-4dca-9a42-923b3eae48a1" containerName="registry-server" probeResult="failure" output=< Feb 17 08:44:12 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 17 08:44:12 crc kubenswrapper[4813]: > Feb 17 08:44:12 crc kubenswrapper[4813]: I0217 08:44:12.858013 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"56f75abd-b269-471e-bf17-66e5c0afb5dd","Type":"ContainerStarted","Data":"1317e61a50aab2e6520fd32526837b5292f3ed24ca83773ad092a678a81391b7"} Feb 17 08:44:12 crc kubenswrapper[4813]: I0217 08:44:12.858411 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"56f75abd-b269-471e-bf17-66e5c0afb5dd","Type":"ContainerStarted","Data":"2fa714e3fcdcc6b76419714137ec5c65e3da4c9c5fe51285d58ee5195ffb9e93"} 
Feb 17 08:44:12 crc kubenswrapper[4813]: I0217 08:44:12.882388 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.882362225 podStartE2EDuration="1.882362225s" podCreationTimestamp="2026-02-17 08:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:44:12.880873204 +0000 UTC m=+200.541634467" watchObservedRunningTime="2026-02-17 08:44:12.882362225 +0000 UTC m=+200.543123488" Feb 17 08:44:13 crc kubenswrapper[4813]: I0217 08:44:13.047281 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:44:13 crc kubenswrapper[4813]: I0217 08:44:13.047464 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:44:13 crc kubenswrapper[4813]: I0217 08:44:13.131698 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:44:13 crc kubenswrapper[4813]: I0217 08:44:13.932974 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:44:14 crc kubenswrapper[4813]: I0217 08:44:14.354227 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rmz42" Feb 17 08:44:14 crc kubenswrapper[4813]: I0217 08:44:14.425757 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rmz42" Feb 17 08:44:14 crc kubenswrapper[4813]: I0217 08:44:14.713219 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wb7ln" Feb 17 08:44:14 crc kubenswrapper[4813]: I0217 08:44:14.780287 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wb7ln" Feb 17 08:44:15 crc kubenswrapper[4813]: I0217 08:44:15.932854 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wb7ln"] Feb 17 08:44:15 crc kubenswrapper[4813]: I0217 08:44:15.934556 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wb7ln" podUID="0882a10e-939a-4cb5-862f-ba18fa5e4718" containerName="registry-server" containerID="cri-o://df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d" gracePeriod=2 Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.376516 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wb7ln" Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.517783 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0882a10e-939a-4cb5-862f-ba18fa5e4718-catalog-content\") pod \"0882a10e-939a-4cb5-862f-ba18fa5e4718\" (UID: \"0882a10e-939a-4cb5-862f-ba18fa5e4718\") " Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.518001 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0882a10e-939a-4cb5-862f-ba18fa5e4718-utilities\") pod \"0882a10e-939a-4cb5-862f-ba18fa5e4718\" (UID: \"0882a10e-939a-4cb5-862f-ba18fa5e4718\") " Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.518024 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pxlc\" (UniqueName: \"kubernetes.io/projected/0882a10e-939a-4cb5-862f-ba18fa5e4718-kube-api-access-7pxlc\") pod \"0882a10e-939a-4cb5-862f-ba18fa5e4718\" (UID: \"0882a10e-939a-4cb5-862f-ba18fa5e4718\") " Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.519468 4813 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0882a10e-939a-4cb5-862f-ba18fa5e4718-utilities" (OuterVolumeSpecName: "utilities") pod "0882a10e-939a-4cb5-862f-ba18fa5e4718" (UID: "0882a10e-939a-4cb5-862f-ba18fa5e4718"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.524533 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0882a10e-939a-4cb5-862f-ba18fa5e4718-kube-api-access-7pxlc" (OuterVolumeSpecName: "kube-api-access-7pxlc") pod "0882a10e-939a-4cb5-862f-ba18fa5e4718" (UID: "0882a10e-939a-4cb5-862f-ba18fa5e4718"). InnerVolumeSpecName "kube-api-access-7pxlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.620011 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pxlc\" (UniqueName: \"kubernetes.io/projected/0882a10e-939a-4cb5-862f-ba18fa5e4718-kube-api-access-7pxlc\") on node \"crc\" DevicePath \"\"" Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.620473 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0882a10e-939a-4cb5-862f-ba18fa5e4718-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.699927 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0882a10e-939a-4cb5-862f-ba18fa5e4718-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0882a10e-939a-4cb5-862f-ba18fa5e4718" (UID: "0882a10e-939a-4cb5-862f-ba18fa5e4718"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.722128 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0882a10e-939a-4cb5-862f-ba18fa5e4718-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.888680 4813 generic.go:334] "Generic (PLEG): container finished" podID="0882a10e-939a-4cb5-862f-ba18fa5e4718" containerID="df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d" exitCode=0 Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.888737 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7ln" event={"ID":"0882a10e-939a-4cb5-862f-ba18fa5e4718","Type":"ContainerDied","Data":"df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d"} Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.888777 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7ln" event={"ID":"0882a10e-939a-4cb5-862f-ba18fa5e4718","Type":"ContainerDied","Data":"920434d3bd6dcd788fbdb9b835779e81f492425f021e00a8ee4c22917b70b0a1"} Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.888806 4813 scope.go:117] "RemoveContainer" containerID="df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d" Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.888842 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wb7ln" Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.940674 4813 scope.go:117] "RemoveContainer" containerID="4e4c933907538c29c38b69d809979884dc016f67d9f17b433993ea0b66eead67" Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.952346 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wb7ln"] Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.958048 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wb7ln"] Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.970255 4813 scope.go:117] "RemoveContainer" containerID="c5a2a996d0dbf2391598880fa6d4b14869e149d2a9a7e38baa770ecbfba92452" Feb 17 08:44:16 crc kubenswrapper[4813]: I0217 08:44:16.999250 4813 scope.go:117] "RemoveContainer" containerID="df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d" Feb 17 08:44:17 crc kubenswrapper[4813]: E0217 08:44:17.001729 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d\": container with ID starting with df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d not found: ID does not exist" containerID="df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d" Feb 17 08:44:17 crc kubenswrapper[4813]: I0217 08:44:17.001764 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d"} err="failed to get container status \"df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d\": rpc error: code = NotFound desc = could not find container \"df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d\": container with ID starting with df94a7240b217db10142b633ea83813743f7eb2f7906c3fd86d74fc0f743c02d not found: ID does 
not exist" Feb 17 08:44:17 crc kubenswrapper[4813]: I0217 08:44:17.001789 4813 scope.go:117] "RemoveContainer" containerID="4e4c933907538c29c38b69d809979884dc016f67d9f17b433993ea0b66eead67" Feb 17 08:44:17 crc kubenswrapper[4813]: E0217 08:44:17.002125 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e4c933907538c29c38b69d809979884dc016f67d9f17b433993ea0b66eead67\": container with ID starting with 4e4c933907538c29c38b69d809979884dc016f67d9f17b433993ea0b66eead67 not found: ID does not exist" containerID="4e4c933907538c29c38b69d809979884dc016f67d9f17b433993ea0b66eead67" Feb 17 08:44:17 crc kubenswrapper[4813]: I0217 08:44:17.002148 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e4c933907538c29c38b69d809979884dc016f67d9f17b433993ea0b66eead67"} err="failed to get container status \"4e4c933907538c29c38b69d809979884dc016f67d9f17b433993ea0b66eead67\": rpc error: code = NotFound desc = could not find container \"4e4c933907538c29c38b69d809979884dc016f67d9f17b433993ea0b66eead67\": container with ID starting with 4e4c933907538c29c38b69d809979884dc016f67d9f17b433993ea0b66eead67 not found: ID does not exist" Feb 17 08:44:17 crc kubenswrapper[4813]: I0217 08:44:17.002161 4813 scope.go:117] "RemoveContainer" containerID="c5a2a996d0dbf2391598880fa6d4b14869e149d2a9a7e38baa770ecbfba92452" Feb 17 08:44:17 crc kubenswrapper[4813]: E0217 08:44:17.002406 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a2a996d0dbf2391598880fa6d4b14869e149d2a9a7e38baa770ecbfba92452\": container with ID starting with c5a2a996d0dbf2391598880fa6d4b14869e149d2a9a7e38baa770ecbfba92452 not found: ID does not exist" containerID="c5a2a996d0dbf2391598880fa6d4b14869e149d2a9a7e38baa770ecbfba92452" Feb 17 08:44:17 crc kubenswrapper[4813]: I0217 08:44:17.002437 4813 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a2a996d0dbf2391598880fa6d4b14869e149d2a9a7e38baa770ecbfba92452"} err="failed to get container status \"c5a2a996d0dbf2391598880fa6d4b14869e149d2a9a7e38baa770ecbfba92452\": rpc error: code = NotFound desc = could not find container \"c5a2a996d0dbf2391598880fa6d4b14869e149d2a9a7e38baa770ecbfba92452\": container with ID starting with c5a2a996d0dbf2391598880fa6d4b14869e149d2a9a7e38baa770ecbfba92452 not found: ID does not exist" Feb 17 08:44:17 crc kubenswrapper[4813]: I0217 08:44:17.119634 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0882a10e-939a-4cb5-862f-ba18fa5e4718" path="/var/lib/kubelet/pods/0882a10e-939a-4cb5-862f-ba18fa5e4718/volumes" Feb 17 08:44:21 crc kubenswrapper[4813]: I0217 08:44:21.140083 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:44:21 crc kubenswrapper[4813]: I0217 08:44:21.210297 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:44:27 crc kubenswrapper[4813]: I0217 08:44:27.604537 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" podUID="bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" containerName="oauth-openshift" containerID="cri-o://8b3de59e92b5b22e3b789566eff99c9775a3238906ef5388c289b16b40f7d712" gracePeriod=15 Feb 17 08:44:27 crc kubenswrapper[4813]: I0217 08:44:27.979010 4813 generic.go:334] "Generic (PLEG): container finished" podID="bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" containerID="8b3de59e92b5b22e3b789566eff99c9775a3238906ef5388c289b16b40f7d712" exitCode=0 Feb 17 08:44:27 crc kubenswrapper[4813]: I0217 08:44:27.979127 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" 
event={"ID":"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f","Type":"ContainerDied","Data":"8b3de59e92b5b22e3b789566eff99c9775a3238906ef5388c289b16b40f7d712"} Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.104158 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.129149 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-trusted-ca-bundle\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.129273 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-error\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.129383 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-serving-cert\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.129448 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-provider-selection\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.129506 
4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-idp-0-file-data\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.129602 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-audit-dir\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.129721 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-audit-policies\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.129776 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-router-certs\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.129824 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-ocp-branding-template\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.129878 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.129930 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4bnt\" (UniqueName: \"kubernetes.io/projected/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-kube-api-access-m4bnt\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.129990 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-session\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.130076 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-cliconfig\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.130126 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-login\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") " Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.130188 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-service-ca\") pod \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\" (UID: \"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f\") "
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.130651 4813 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.130795 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.130882 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.131455 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "v4-0-config-system-service-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.132286 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.141063 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-kube-api-access-m4bnt" (OuterVolumeSpecName: "kube-api-access-m4bnt") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "kube-api-access-m4bnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.145082 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.156449 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.158473 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.161288 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.161594 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.161823 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "v4-0-config-user-template-error".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.162108 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.162278 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" (UID: "bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.231655 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.231914 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.232002 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.232102 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.232183 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.232278 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.232416 4813 reconciler_common.go:293] "Volume detached for volume
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.232511 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.232591 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.232669 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.232748 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.232841 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4bnt\" (UniqueName: \"kubernetes.io/projected/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-kube-api-access-m4bnt\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.232940 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:28 crc kubenswrapper[4813]:
I0217 08:44:28.988233 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl" event={"ID":"bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f","Type":"ContainerDied","Data":"02fc91d89346497e7d8d5b1c6cfa7d33ab48103418d017484f4d71b6fb8f8e97"}
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.988702 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-29bxl"
Feb 17 08:44:28 crc kubenswrapper[4813]: I0217 08:44:28.989281 4813 scope.go:117] "RemoveContainer" containerID="8b3de59e92b5b22e3b789566eff99c9775a3238906ef5388c289b16b40f7d712"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.036409 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-29bxl"]
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.039712 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-29bxl"]
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.119735 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" path="/var/lib/kubelet/pods/bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f/volumes"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.453148 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7559487fb5-b6st7"]
Feb 17 08:44:29 crc kubenswrapper[4813]: E0217 08:44:29.453391 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" containerName="oauth-openshift"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.453413 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" containerName="oauth-openshift"
Feb 17 08:44:29 crc kubenswrapper[4813]: E0217 08:44:29.453425 4813 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="0882a10e-939a-4cb5-862f-ba18fa5e4718" containerName="extract-content"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.453433 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0882a10e-939a-4cb5-862f-ba18fa5e4718" containerName="extract-content"
Feb 17 08:44:29 crc kubenswrapper[4813]: E0217 08:44:29.453441 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0882a10e-939a-4cb5-862f-ba18fa5e4718" containerName="registry-server"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.453449 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0882a10e-939a-4cb5-862f-ba18fa5e4718" containerName="registry-server"
Feb 17 08:44:29 crc kubenswrapper[4813]: E0217 08:44:29.453468 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0882a10e-939a-4cb5-862f-ba18fa5e4718" containerName="extract-utilities"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.453476 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0882a10e-939a-4cb5-862f-ba18fa5e4718" containerName="extract-utilities"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.453615 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0882a10e-939a-4cb5-862f-ba18fa5e4718" containerName="registry-server"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.453630 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbbf80bf-29d8-4d8a-a8f6-1a1d04f4ac7f" containerName="oauth-openshift"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.454105 4813 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.457701 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.458355 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.458688 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.459057 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.459468 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.459627 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.459779 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.459908 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.460059 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.460179 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 17
08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.460794 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.461029 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.464061 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.466007 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.473650 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.475928 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7559487fb5-b6st7"]
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.547914 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-service-ca\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.547967 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-session\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") "
pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.547988 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.548013 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.548033 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-audit-dir\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.548053 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-router-certs\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.548069 4813 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.548092 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.548107 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-user-template-login\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.548124 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.548140 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.548169 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-user-template-error\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.548190 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znqzm\" (UniqueName: \"kubernetes.io/projected/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-kube-api-access-znqzm\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.548203 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-audit-policies\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.649102 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-user-template-error\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") "
pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.649180 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znqzm\" (UniqueName: \"kubernetes.io/projected/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-kube-api-access-znqzm\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.649221 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-audit-policies\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.649261 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-session\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.650395 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-audit-policies\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.650420 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName:
\"kubernetes.io/configmap/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-service-ca\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.650497 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.650703 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.650779 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-audit-dir\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.650896 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-router-certs\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") "
pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.650945 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.650982 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-audit-dir\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.651025 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.651073 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-user-template-login\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.651122 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName:
\"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.651190 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.652343 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.652363 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-service-ca\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7"
Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.654766 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") "
pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.655039 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-router-certs\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.655604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-session\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.655610 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-user-template-login\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.655954 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.657660 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-user-template-error\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.658283 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.667058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.670524 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.688648 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znqzm\" (UniqueName: \"kubernetes.io/projected/5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394-kube-api-access-znqzm\") pod \"oauth-openshift-7559487fb5-b6st7\" (UID: \"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394\") " pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 
08:44:29 crc kubenswrapper[4813]: I0217 08:44:29.772278 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 08:44:30 crc kubenswrapper[4813]: I0217 08:44:30.067736 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7559487fb5-b6st7"] Feb 17 08:44:30 crc kubenswrapper[4813]: W0217 08:44:30.074411 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ef5b22d_de78_42c1_bbc4_a1ce0f4dc394.slice/crio-5bb573593b4516bd79c075b5caaf0fe63a054803410a29e09c1af74694b9ad22 WatchSource:0}: Error finding container 5bb573593b4516bd79c075b5caaf0fe63a054803410a29e09c1af74694b9ad22: Status 404 returned error can't find the container with id 5bb573593b4516bd79c075b5caaf0fe63a054803410a29e09c1af74694b9ad22 Feb 17 08:44:31 crc kubenswrapper[4813]: I0217 08:44:31.006871 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" event={"ID":"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394","Type":"ContainerStarted","Data":"44f780891f800b8c7672abd832bea5efb2cab4c595c9fe65c6736b19090106a6"} Feb 17 08:44:31 crc kubenswrapper[4813]: I0217 08:44:31.007283 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" event={"ID":"5ef5b22d-de78-42c1-bbc4-a1ce0f4dc394","Type":"ContainerStarted","Data":"5bb573593b4516bd79c075b5caaf0fe63a054803410a29e09c1af74694b9ad22"} Feb 17 08:44:31 crc kubenswrapper[4813]: I0217 08:44:31.007349 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 08:44:31 crc kubenswrapper[4813]: I0217 08:44:31.010880 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" Feb 17 08:44:31 
crc kubenswrapper[4813]: I0217 08:44:31.074243 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7559487fb5-b6st7" podStartSLOduration=29.074212115 podStartE2EDuration="29.074212115s" podCreationTimestamp="2026-02-17 08:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:44:31.036363395 +0000 UTC m=+218.697124658" watchObservedRunningTime="2026-02-17 08:44:31.074212115 +0000 UTC m=+218.734973388" Feb 17 08:44:35 crc kubenswrapper[4813]: I0217 08:44:35.165146 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:44:35 crc kubenswrapper[4813]: I0217 08:44:35.165578 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:44:35 crc kubenswrapper[4813]: I0217 08:44:35.165641 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:44:35 crc kubenswrapper[4813]: I0217 08:44:35.166483 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8bbac5e7eee4ad535b6f2568bd0459d254170dcba13d0214a55985b939c00e1"} pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 08:44:35 crc 
kubenswrapper[4813]: I0217 08:44:35.166591 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" containerID="cri-o://e8bbac5e7eee4ad535b6f2568bd0459d254170dcba13d0214a55985b939c00e1" gracePeriod=600 Feb 17 08:44:35 crc kubenswrapper[4813]: E0217 08:44:35.297273 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a6ba827_b08b_4163_b067_d9adb119398d.slice/crio-conmon-e8bbac5e7eee4ad535b6f2568bd0459d254170dcba13d0214a55985b939c00e1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a6ba827_b08b_4163_b067_d9adb119398d.slice/crio-e8bbac5e7eee4ad535b6f2568bd0459d254170dcba13d0214a55985b939c00e1.scope\": RecentStats: unable to find data in memory cache]" Feb 17 08:44:36 crc kubenswrapper[4813]: I0217 08:44:36.051587 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a6ba827-b08b-4163-b067-d9adb119398d" containerID="e8bbac5e7eee4ad535b6f2568bd0459d254170dcba13d0214a55985b939c00e1" exitCode=0 Feb 17 08:44:36 crc kubenswrapper[4813]: I0217 08:44:36.051719 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerDied","Data":"e8bbac5e7eee4ad535b6f2568bd0459d254170dcba13d0214a55985b939c00e1"} Feb 17 08:44:36 crc kubenswrapper[4813]: I0217 08:44:36.052027 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerStarted","Data":"926e335d47cd84ab4eb72c1a31c7d4369f614aaae2415534ee97ac4b058875f2"} Feb 17 08:44:50 crc 
kubenswrapper[4813]: I0217 08:44:50.600204 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 08:44:50 crc kubenswrapper[4813]: E0217 08:44:50.600822 4813 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.601834 4813 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.601990 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.602092 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f" gracePeriod=15 Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.602171 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc" gracePeriod=15 Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.602213 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54" 
gracePeriod=15 Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.602310 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393" gracePeriod=15 Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.602387 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c" gracePeriod=15 Feb 17 08:44:50 crc kubenswrapper[4813]: E0217 08:44:50.667426 4813 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681485 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 08:44:50 crc kubenswrapper[4813]: E0217 08:44:50.681679 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681690 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 08:44:50 crc kubenswrapper[4813]: E0217 08:44:50.681704 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681710 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 08:44:50 crc kubenswrapper[4813]: E0217 08:44:50.681719 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681725 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 08:44:50 crc kubenswrapper[4813]: E0217 08:44:50.681731 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681737 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 08:44:50 crc kubenswrapper[4813]: E0217 08:44:50.681745 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681751 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 08:44:50 crc kubenswrapper[4813]: E0217 08:44:50.681761 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681766 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 08:44:50 crc kubenswrapper[4813]: E0217 08:44:50.681773 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 08:44:50 crc 
kubenswrapper[4813]: I0217 08:44:50.681779 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681855 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681865 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681873 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681878 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681886 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.681895 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.714027 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.714069 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.714108 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.714378 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.714421 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.816227 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc 
kubenswrapper[4813]: I0217 08:44:50.816335 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.816360 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.816380 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.816396 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.816411 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: 
I0217 08:44:50.816425 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.816443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.816444 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.816503 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.816514 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.816539 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.816560 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.937240 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.937284 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.937327 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.937426 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.937443 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.937464 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:44:50 crc kubenswrapper[4813]: I0217 08:44:50.968643 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:44:51 crc kubenswrapper[4813]: E0217 08:44:51.052443 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894fc3cb071cea7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 08:44:51.051933351 +0000 UTC m=+238.712694594,LastTimestamp:2026-02-17 08:44:51.051933351 +0000 UTC m=+238.712694594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 08:44:51 crc kubenswrapper[4813]: I0217 08:44:51.156187 4813 generic.go:334] "Generic (PLEG): container finished" podID="56f75abd-b269-471e-bf17-66e5c0afb5dd" containerID="1317e61a50aab2e6520fd32526837b5292f3ed24ca83773ad092a678a81391b7" exitCode=0 Feb 17 08:44:51 crc kubenswrapper[4813]: I0217 08:44:51.156272 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"56f75abd-b269-471e-bf17-66e5c0afb5dd","Type":"ContainerDied","Data":"1317e61a50aab2e6520fd32526837b5292f3ed24ca83773ad092a678a81391b7"} Feb 17 08:44:51 crc kubenswrapper[4813]: I0217 08:44:51.157372 4813 status_manager.go:851] "Failed to get status for pod" 
podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:51 crc kubenswrapper[4813]: I0217 08:44:51.160188 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 17 08:44:51 crc kubenswrapper[4813]: I0217 08:44:51.161463 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 17 08:44:51 crc kubenswrapper[4813]: I0217 08:44:51.162227 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393" exitCode=0
Feb 17 08:44:51 crc kubenswrapper[4813]: I0217 08:44:51.162363 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc" exitCode=0
Feb 17 08:44:51 crc kubenswrapper[4813]: I0217 08:44:51.162386 4813 scope.go:117] "RemoveContainer" containerID="5a1280cbe3dc24e6a49232a1b901588c2c0506439d12950df0e063285a9241ad"
Feb 17 08:44:51 crc kubenswrapper[4813]: I0217 08:44:51.162461 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54" exitCode=0
Feb 17 08:44:51 crc kubenswrapper[4813]: I0217 08:44:51.162668 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c" exitCode=2
Feb 17 08:44:51 crc kubenswrapper[4813]: I0217 08:44:51.163567 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"601cfc9db9004aa038e698dd5fe3a66f18a6d66fa8470f6582b3a5d5510c3d3a"}
Feb 17 08:44:51 crc kubenswrapper[4813]: E0217 08:44:51.632953 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894fc3cb071cea7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 08:44:51.051933351 +0000 UTC m=+238.712694594,LastTimestamp:2026-02-17 08:44:51.051933351 +0000 UTC m=+238.712694594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.171505 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.174707 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"46603adae48d144608c4e88831928dd9429e74024a7de4e19a8584eedc7715c7"}
Feb 17 08:44:52 crc kubenswrapper[4813]: E0217 08:44:52.177513 4813 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.177945 4813 status_manager.go:851] "Failed to get status for pod" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.442696 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.443412 4813 status_manager.go:851] "Failed to get status for pod" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.572214 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56f75abd-b269-471e-bf17-66e5c0afb5dd-kubelet-dir\") pod \"56f75abd-b269-471e-bf17-66e5c0afb5dd\" (UID: \"56f75abd-b269-471e-bf17-66e5c0afb5dd\") "
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.572363 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56f75abd-b269-471e-bf17-66e5c0afb5dd-kube-api-access\") pod \"56f75abd-b269-471e-bf17-66e5c0afb5dd\" (UID: \"56f75abd-b269-471e-bf17-66e5c0afb5dd\") "
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.572390 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56f75abd-b269-471e-bf17-66e5c0afb5dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "56f75abd-b269-471e-bf17-66e5c0afb5dd" (UID: "56f75abd-b269-471e-bf17-66e5c0afb5dd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.572437 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56f75abd-b269-471e-bf17-66e5c0afb5dd-var-lock\") pod \"56f75abd-b269-471e-bf17-66e5c0afb5dd\" (UID: \"56f75abd-b269-471e-bf17-66e5c0afb5dd\") "
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.572711 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56f75abd-b269-471e-bf17-66e5c0afb5dd-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.572821 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56f75abd-b269-471e-bf17-66e5c0afb5dd-var-lock" (OuterVolumeSpecName: "var-lock") pod "56f75abd-b269-471e-bf17-66e5c0afb5dd" (UID: "56f75abd-b269-471e-bf17-66e5c0afb5dd"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.579531 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f75abd-b269-471e-bf17-66e5c0afb5dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "56f75abd-b269-471e-bf17-66e5c0afb5dd" (UID: "56f75abd-b269-471e-bf17-66e5c0afb5dd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.675799 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56f75abd-b269-471e-bf17-66e5c0afb5dd-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:52 crc kubenswrapper[4813]: I0217 08:44:52.675847 4813 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56f75abd-b269-471e-bf17-66e5c0afb5dd-var-lock\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.006063 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.007807 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.008484 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.008810 4813 status_manager.go:851] "Failed to get status for pod" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.121052 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.121640 4813 status_manager.go:851] "Failed to get status for pod" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.181728 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.181775 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.181861 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.181870 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.181935 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.182024 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.182062 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"56f75abd-b269-471e-bf17-66e5c0afb5dd","Type":"ContainerDied","Data":"2fa714e3fcdcc6b76419714137ec5c65e3da4c9c5fe51285d58ee5195ffb9e93"}
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.182080 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.182102 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa714e3fcdcc6b76419714137ec5c65e3da4c9c5fe51285d58ee5195ffb9e93"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.182291 4813 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.182393 4813 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.182417 4813 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.185599 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f" exitCode=0
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.185693 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.185745 4813 scope.go:117] "RemoveContainer" containerID="09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.185876 4813 status_manager.go:851] "Failed to get status for pod" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.186343 4813 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.186545 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.187052 4813 status_manager.go:851] "Failed to get status for pod" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.199069 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.199577 4813 status_manager.go:851] "Failed to get status for pod" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.202106 4813 scope.go:117] "RemoveContainer" containerID="a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.215530 4813 scope.go:117] "RemoveContainer" containerID="e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.226115 4813 scope.go:117] "RemoveContainer" containerID="63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.236191 4813 scope.go:117] "RemoveContainer" containerID="8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.248730 4813 scope.go:117] "RemoveContainer" containerID="677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.280150 4813 scope.go:117] "RemoveContainer" containerID="09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.280554 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\": container with ID starting with 09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393 not found: ID does not exist" containerID="09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.280588 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393"} err="failed to get container status \"09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\": rpc error: code = NotFound desc = could not find container \"09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393\": container with ID starting with 09bb3966e1c8cc4c236ea51d690fdcf3fb7cdd4bba6eda5fd3774fb7d9099393 not found: ID does not exist"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.280609 4813 scope.go:117] "RemoveContainer" containerID="a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.280848 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\": container with ID starting with a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc not found: ID does not exist" containerID="a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.280874 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc"} err="failed to get container status \"a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\": rpc error: code = NotFound desc = could not find container \"a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc\": container with ID starting with a625163f6303a9ede3c616e16bb06fe6684467946be1df5dd945738cd3c456dc not found: ID does not exist"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.280890 4813 scope.go:117] "RemoveContainer" containerID="e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.281201 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\": container with ID starting with e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54 not found: ID does not exist" containerID="e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.281224 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54"} err="failed to get container status \"e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\": rpc error: code = NotFound desc = could not find container \"e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54\": container with ID starting with e7bf0ee1d043d6765a463e916f631d298795702909afbb54f657498756cc6c54 not found: ID does not exist"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.281243 4813 scope.go:117] "RemoveContainer" containerID="63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.281596 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\": container with ID starting with 63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c not found: ID does not exist" containerID="63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.281616 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c"} err="failed to get container status \"63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\": rpc error: code = NotFound desc = could not find container \"63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c\": container with ID starting with 63dcdc7110a8f0d67125ee0d4d650d1acce0aa04b877300245f7b4ae906e330c not found: ID does not exist"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.281628 4813 scope.go:117] "RemoveContainer" containerID="8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.281866 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\": container with ID starting with 8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f not found: ID does not exist" containerID="8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.281882 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f"} err="failed to get container status \"8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\": rpc error: code = NotFound desc = could not find container \"8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f\": container with ID starting with 8361e92976a79a7d322daa37f254de3c03852f37c5ef3a30b122092e8a110d6f not found: ID does not exist"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.281894 4813 scope.go:117] "RemoveContainer" containerID="677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.282396 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\": container with ID starting with 677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108 not found: ID does not exist" containerID="677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.282443 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108"} err="failed to get container status \"677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\": rpc error: code = NotFound desc = could not find container \"677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108\": container with ID starting with 677f969a7c8a2690c2a18fa5021fd6abd7eb8bbd6cba2ce5ef4385916dcc4108 not found: ID does not exist"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.475221 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.475704 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.476175 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.476458 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.476837 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:44:53 crc kubenswrapper[4813]: I0217 08:44:53.476870 4813 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.477252 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="200ms"
Feb 17 08:44:53 crc kubenswrapper[4813]: E0217 08:44:53.677712 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="400ms"
Feb 17 08:44:54 crc kubenswrapper[4813]: E0217 08:44:54.078480 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="800ms"
Feb 17 08:44:54 crc kubenswrapper[4813]: E0217 08:44:54.880105 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="1.6s"
Feb 17 08:44:55 crc kubenswrapper[4813]: I0217 08:44:55.118817 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Feb 17 08:44:56 crc kubenswrapper[4813]: E0217 08:44:56.482010 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="3.2s"
Feb 17 08:44:59 crc kubenswrapper[4813]: E0217 08:44:59.682916 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="6.4s"
Feb 17 08:45:01 crc kubenswrapper[4813]: E0217 08:45:01.634987 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894fc3cb071cea7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 08:44:51.051933351 +0000 UTC m=+238.712694594,LastTimestamp:2026-02-17 08:44:51.051933351 +0000 UTC m=+238.712694594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.110512 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.115715 4813 status_manager.go:851] "Failed to get status for pod" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.116676 4813 status_manager.go:851] "Failed to get status for pod" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.139806 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7fdf09f7-638a-4436-ad1d-f8afe2855536"
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.139849 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7fdf09f7-638a-4436-ad1d-f8afe2855536"
Feb 17 08:45:03 crc kubenswrapper[4813]: E0217 08:45:03.140398 4813 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.141033 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 08:45:03 crc kubenswrapper[4813]: W0217 08:45:03.161942 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-8bcc6216f1e7523e415cee3488977523187d95b0e7c03bcd3ebfae1e4c5cd7d7 WatchSource:0}: Error finding container 8bcc6216f1e7523e415cee3488977523187d95b0e7c03bcd3ebfae1e4c5cd7d7: Status 404 returned error can't find the container with id 8bcc6216f1e7523e415cee3488977523187d95b0e7c03bcd3ebfae1e4c5cd7d7
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.243350 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.243465 4813 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e" exitCode=1
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.243586 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e"}
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.244465 4813 scope.go:117] "RemoveContainer" containerID="fab462285f25ec3462bcbbc76ef68cc0d6e3c1c9c042512dff9a8c68a743e71e"
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.244646 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.245190 4813 status_manager.go:851] "Failed to get status for pod" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:45:03 crc kubenswrapper[4813]: I0217 08:45:03.248203 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8bcc6216f1e7523e415cee3488977523187d95b0e7c03bcd3ebfae1e4c5cd7d7"}
Feb 17 08:45:04 crc kubenswrapper[4813]: I0217 08:45:04.121920 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 08:45:04 crc kubenswrapper[4813]: I0217 08:45:04.257111 4813 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f734054778b581825d3d4796785e3dc1c43c80fe0f2940e397b8b9f0aedb46b7" exitCode=0
Feb 17 08:45:04 crc kubenswrapper[4813]: I0217 08:45:04.257200 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f734054778b581825d3d4796785e3dc1c43c80fe0f2940e397b8b9f0aedb46b7"}
Feb 17 08:45:04 crc kubenswrapper[4813]: I0217 08:45:04.257585 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7fdf09f7-638a-4436-ad1d-f8afe2855536"
Feb 17 08:45:04 crc kubenswrapper[4813]: I0217 08:45:04.258030 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7fdf09f7-638a-4436-ad1d-f8afe2855536"
Feb 17 08:45:04 crc kubenswrapper[4813]: I0217 08:45:04.258221 4813 status_manager.go:851] "Failed to get status for pod" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:45:04 crc kubenswrapper[4813]: I0217 08:45:04.258980 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:45:04 crc kubenswrapper[4813]: E0217 08:45:04.258987 4813 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 08:45:04 crc kubenswrapper[4813]: I0217 08:45:04.263152 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 17 08:45:04 crc kubenswrapper[4813]: I0217 08:45:04.263443 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ef5888c9488ee1c7fb45ae66e01c7ad7608309afb1d49710c0401df035e2ad1c"}
Feb 17 08:45:04 crc kubenswrapper[4813]: I0217 08:45:04.264253 4813 status_manager.go:851] "Failed to get status for pod" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:45:04 crc kubenswrapper[4813]: I0217 08:45:04.264751 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused"
Feb 17 08:45:05 crc kubenswrapper[4813]: I0217 08:45:05.274776 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"db23f79be56c33cbee06e939300ebd689cf9b771f1e3837848f40f9c2e1e85ce"}
Feb 17 08:45:05 crc kubenswrapper[4813]: I0217 08:45:05.275095 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f80a83451c5bbb753f3a4dc7c11d6ec14a96bc58d58688ae254a76bccdeae7ef"}
Feb 17 08:45:05 crc kubenswrapper[4813]: I0217 08:45:05.275117 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a13f6d0c081e85e7550087d296e816ea4612e13819b4232dabc2b0d15242fac6"}
Feb 17 08:45:06 crc kubenswrapper[4813]: I0217 08:45:06.285693 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"66c5c667cb6ab42e60a51b9f1c2375254c7818c7089a4859a32d8c1a066add1f"}
Feb 17 08:45:06 crc kubenswrapper[4813]: I0217 08:45:06.285738 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7b72a3c357368a7eeb7a75c5652fe4d137f5af5c6b624984b50c1a1a0ddb0f2b"} Feb 17 08:45:06 crc kubenswrapper[4813]: I0217 08:45:06.285862 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:45:06 crc kubenswrapper[4813]: I0217 08:45:06.285989 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7fdf09f7-638a-4436-ad1d-f8afe2855536" Feb 17 08:45:06 crc kubenswrapper[4813]: I0217 08:45:06.286018 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7fdf09f7-638a-4436-ad1d-f8afe2855536" Feb 17 08:45:08 crc kubenswrapper[4813]: I0217 08:45:08.141721 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:45:08 crc kubenswrapper[4813]: I0217 08:45:08.142231 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:45:08 crc kubenswrapper[4813]: I0217 08:45:08.151172 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:45:10 crc kubenswrapper[4813]: I0217 08:45:10.277122 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:45:11 crc kubenswrapper[4813]: I0217 08:45:11.333196 4813 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:45:12 crc kubenswrapper[4813]: I0217 08:45:12.325904 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7fdf09f7-638a-4436-ad1d-f8afe2855536" Feb 17 08:45:12 crc kubenswrapper[4813]: I0217 08:45:12.326301 
4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7fdf09f7-638a-4436-ad1d-f8afe2855536" Feb 17 08:45:12 crc kubenswrapper[4813]: I0217 08:45:12.334817 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 08:45:13 crc kubenswrapper[4813]: I0217 08:45:13.132802 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="49bafb0b-7eb9-4751-bbd6-ed7e88544e67" Feb 17 08:45:13 crc kubenswrapper[4813]: I0217 08:45:13.331356 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7fdf09f7-638a-4436-ad1d-f8afe2855536" Feb 17 08:45:13 crc kubenswrapper[4813]: I0217 08:45:13.331434 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7fdf09f7-638a-4436-ad1d-f8afe2855536" Feb 17 08:45:13 crc kubenswrapper[4813]: I0217 08:45:13.334794 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="49bafb0b-7eb9-4751-bbd6-ed7e88544e67" Feb 17 08:45:14 crc kubenswrapper[4813]: I0217 08:45:14.122035 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:45:14 crc kubenswrapper[4813]: I0217 08:45:14.122229 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 17 08:45:14 crc kubenswrapper[4813]: I0217 08:45:14.122296 4813 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 17 08:45:21 crc kubenswrapper[4813]: I0217 08:45:21.259296 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 08:45:21 crc kubenswrapper[4813]: I0217 08:45:21.303740 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 08:45:21 crc kubenswrapper[4813]: I0217 08:45:21.710369 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 08:45:21 crc kubenswrapper[4813]: I0217 08:45:21.881839 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 08:45:22 crc kubenswrapper[4813]: I0217 08:45:22.037220 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 08:45:22 crc kubenswrapper[4813]: I0217 08:45:22.685494 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 08:45:22 crc kubenswrapper[4813]: I0217 08:45:22.734667 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 08:45:22 crc kubenswrapper[4813]: I0217 08:45:22.833018 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 08:45:23 crc kubenswrapper[4813]: I0217 08:45:23.154271 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 08:45:23 crc 
kubenswrapper[4813]: I0217 08:45:23.257210 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 08:45:23 crc kubenswrapper[4813]: I0217 08:45:23.313608 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 08:45:23 crc kubenswrapper[4813]: I0217 08:45:23.335493 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 08:45:23 crc kubenswrapper[4813]: I0217 08:45:23.652113 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 08:45:23 crc kubenswrapper[4813]: I0217 08:45:23.948967 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 08:45:23 crc kubenswrapper[4813]: I0217 08:45:23.976347 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 08:45:23 crc kubenswrapper[4813]: I0217 08:45:23.997173 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.006439 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.128712 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.141268 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.243873 4813 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.245876 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.483679 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.627468 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.750503 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.780686 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.831045 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.840475 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.869683 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.874851 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.930333 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 08:45:24 crc kubenswrapper[4813]: I0217 08:45:24.948884 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.007410 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.100797 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.152886 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.157558 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.188607 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.235249 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.327193 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.357220 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.453194 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.473144 4813 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.491380 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.676044 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.698819 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.870866 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.885797 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.928386 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.930442 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 08:45:25 crc kubenswrapper[4813]: I0217 08:45:25.984704 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.057257 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.110649 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.148369 4813 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.206915 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.230238 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.319017 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.520581 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.558499 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.590973 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.707864 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.838172 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.881000 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 08:45:26 crc kubenswrapper[4813]: I0217 08:45:26.895103 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 08:45:26 crc 
kubenswrapper[4813]: I0217 08:45:26.988399 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.057244 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.096454 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.143526 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.186265 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.243770 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.249834 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.256517 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.334396 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.345539 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.376570 4813 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.405493 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.418993 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.528286 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.573085 4813 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.625541 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.670741 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.672807 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.753163 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.812960 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.925813 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 08:45:27 crc kubenswrapper[4813]: 
I0217 08:45:27.997758 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 08:45:27 crc kubenswrapper[4813]: I0217 08:45:27.997830 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.005565 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.183196 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.200354 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.257962 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.316829 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.346394 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.409590 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.419920 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.420404 4813 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.470084 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.503285 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.621155 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.634755 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.646041 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.684721 4813 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.701151 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.706802 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.710815 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.744086 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 
08:45:28.789374 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.792703 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.855549 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 08:45:28 crc kubenswrapper[4813]: I0217 08:45:28.926870 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.048448 4813 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.062169 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.104878 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.120240 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.223101 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.278834 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.306760 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 08:45:29 crc 
kubenswrapper[4813]: I0217 08:45:29.324828 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.373992 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.402939 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.411936 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.418332 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.460277 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.461827 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.464506 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.559342 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.578329 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.641434 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.698913 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.719221 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.730870 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.896256 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.918381 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.965518 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 17 08:45:29 crc kubenswrapper[4813]: I0217 08:45:29.974095 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.015215 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.018966 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.152128 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.178492 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.187354 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.236406 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.238111 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.281817 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.314281 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.373210 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.473271 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.615413 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.777942 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 17 08:45:30 crc kubenswrapper[4813]: I0217 08:45:30.831356 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.055439 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.094832 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.139480 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.228791 4813 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.305531 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.306394 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.318377 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.366219 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.381190 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.411795 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.468975 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.563665 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.599829 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.678371 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.715108 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.805714 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.848693 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.887222 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.900531 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.953785 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 17 08:45:31 crc kubenswrapper[4813]: I0217 08:45:31.975440 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 17 08:45:32 crc kubenswrapper[4813]: I0217 08:45:32.153810 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 17 08:45:32 crc kubenswrapper[4813]: I0217 08:45:32.263068 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 17 08:45:32 crc kubenswrapper[4813]: I0217 08:45:32.301573 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 17 08:45:32 crc kubenswrapper[4813]: I0217 08:45:32.481612 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 17 08:45:32 crc kubenswrapper[4813]: I0217 08:45:32.512900 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 17 08:45:32 crc kubenswrapper[4813]: I0217 08:45:32.544379 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 17 08:45:32 crc kubenswrapper[4813]: I0217 08:45:32.619836 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 08:45:32 crc kubenswrapper[4813]: I0217 08:45:32.696699 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 17 08:45:32 crc kubenswrapper[4813]: I0217 08:45:32.832614 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 17 08:45:32 crc kubenswrapper[4813]: I0217 08:45:32.835725 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 08:45:32 crc kubenswrapper[4813]: I0217 08:45:32.948871 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 17 08:45:32 crc kubenswrapper[4813]: I0217 08:45:32.978387 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.042727 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.075236 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.224512 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.316775 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.371802 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.449344 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.559044 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.657821 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.699906 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.802990 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.815165 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.856828 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.872011 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 17 08:45:33 crc kubenswrapper[4813]: I0217 08:45:33.963238 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.054871 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.118977 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.146178 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.280620 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.346075 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.353491 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.364888 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.393789 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.433354 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.433362 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.444028 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.511763 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.514557 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.531189 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.557842 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.586084 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.610475 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.657049 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.710973 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.716130 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.816003 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 17 08:45:34 crc kubenswrapper[4813]: I0217 08:45:34.938751 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.057613 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.071865 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.112593 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.127000 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.136779 4813 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.137167 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.140404 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.140455 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.147718 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.164368 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.164346071 podStartE2EDuration="24.164346071s" podCreationTimestamp="2026-02-17 08:45:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:45:35.161703579 +0000 UTC m=+282.822464802" watchObservedRunningTime="2026-02-17 08:45:35.164346071 +0000 UTC m=+282.825107314"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.230693 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.286290 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.371247 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.457215 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.500908 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.771678 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.827773 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.838438 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.900957 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.925217 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.942492 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 17 08:45:35 crc kubenswrapper[4813]: I0217 08:45:35.992578 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.082503 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.125757 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.162032 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.257390 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.261444 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.308240 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.333177 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.451203 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.556283 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.683094 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wtt9"]
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.683387 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4wtt9" podUID="0d0624db-755c-4a56-afd4-02eeb8b8b1db" containerName="registry-server" containerID="cri-o://d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89" gracePeriod=30
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.698241 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xxj57"]
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.698735 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xxj57" podUID="064c46bd-0e88-4dca-9a42-923b3eae48a1" containerName="registry-server" containerID="cri-o://1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6" gracePeriod=30
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.722427 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-whhpn"]
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.723397 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" podUID="37375f80-f004-4621-b863-326c6e296435" containerName="marketplace-operator" containerID="cri-o://bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31" gracePeriod=30
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.725213 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.737969 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gsnb"]
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.738394 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4gsnb" podUID="2d05c6fb-46ad-4722-b20f-c42bba042431" containerName="registry-server" containerID="cri-o://ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4" gracePeriod=30
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.749898 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmz42"]
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.750944 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rmz42" podUID="8e827307-21c0-4712-ab98-e95d277f4201" containerName="registry-server" containerID="cri-o://b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be" gracePeriod=30
Feb 17 08:45:36 crc kubenswrapper[4813]: I0217 08:45:36.859155 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.104473 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wtt9"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.105114 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xxj57"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.158710 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmz42"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.165402 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.170209 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gsnb"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.228832 4813 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.229571 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0624db-755c-4a56-afd4-02eeb8b8b1db-catalog-content\") pod \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\" (UID: \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.229781 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8psm\" (UniqueName: \"kubernetes.io/projected/0d0624db-755c-4a56-afd4-02eeb8b8b1db-kube-api-access-j8psm\") pod \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\" (UID: \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.229913 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsvpn\" (UniqueName: \"kubernetes.io/projected/064c46bd-0e88-4dca-9a42-923b3eae48a1-kube-api-access-tsvpn\") pod \"064c46bd-0e88-4dca-9a42-923b3eae48a1\" (UID: \"064c46bd-0e88-4dca-9a42-923b3eae48a1\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.230965 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064c46bd-0e88-4dca-9a42-923b3eae48a1-utilities\") pod \"064c46bd-0e88-4dca-9a42-923b3eae48a1\" (UID: \"064c46bd-0e88-4dca-9a42-923b3eae48a1\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.231001 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064c46bd-0e88-4dca-9a42-923b3eae48a1-catalog-content\") pod \"064c46bd-0e88-4dca-9a42-923b3eae48a1\" (UID: \"064c46bd-0e88-4dca-9a42-923b3eae48a1\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.231059 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0624db-755c-4a56-afd4-02eeb8b8b1db-utilities\") pod \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\" (UID: \"0d0624db-755c-4a56-afd4-02eeb8b8b1db\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.231646 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064c46bd-0e88-4dca-9a42-923b3eae48a1-utilities" (OuterVolumeSpecName: "utilities") pod "064c46bd-0e88-4dca-9a42-923b3eae48a1" (UID: "064c46bd-0e88-4dca-9a42-923b3eae48a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.232040 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d0624db-755c-4a56-afd4-02eeb8b8b1db-utilities" (OuterVolumeSpecName: "utilities") pod "0d0624db-755c-4a56-afd4-02eeb8b8b1db" (UID: "0d0624db-755c-4a56-afd4-02eeb8b8b1db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.236548 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.236668 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064c46bd-0e88-4dca-9a42-923b3eae48a1-kube-api-access-tsvpn" (OuterVolumeSpecName: "kube-api-access-tsvpn") pod "064c46bd-0e88-4dca-9a42-923b3eae48a1" (UID: "064c46bd-0e88-4dca-9a42-923b3eae48a1"). InnerVolumeSpecName "kube-api-access-tsvpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.236868 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d0624db-755c-4a56-afd4-02eeb8b8b1db-kube-api-access-j8psm" (OuterVolumeSpecName: "kube-api-access-j8psm") pod "0d0624db-755c-4a56-afd4-02eeb8b8b1db" (UID: "0d0624db-755c-4a56-afd4-02eeb8b8b1db"). InnerVolumeSpecName "kube-api-access-j8psm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.292983 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d0624db-755c-4a56-afd4-02eeb8b8b1db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d0624db-755c-4a56-afd4-02eeb8b8b1db" (UID: "0d0624db-755c-4a56-afd4-02eeb8b8b1db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.294239 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064c46bd-0e88-4dca-9a42-923b3eae48a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "064c46bd-0e88-4dca-9a42-923b3eae48a1" (UID: "064c46bd-0e88-4dca-9a42-923b3eae48a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332047 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e827307-21c0-4712-ab98-e95d277f4201-catalog-content\") pod \"8e827307-21c0-4712-ab98-e95d277f4201\" (UID: \"8e827307-21c0-4712-ab98-e95d277f4201\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332124 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff6zk\" (UniqueName: \"kubernetes.io/projected/37375f80-f004-4621-b863-326c6e296435-kube-api-access-ff6zk\") pod \"37375f80-f004-4621-b863-326c6e296435\" (UID: \"37375f80-f004-4621-b863-326c6e296435\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332174 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42jvh\" (UniqueName: \"kubernetes.io/projected/2d05c6fb-46ad-4722-b20f-c42bba042431-kube-api-access-42jvh\") pod \"2d05c6fb-46ad-4722-b20f-c42bba042431\" (UID: \"2d05c6fb-46ad-4722-b20f-c42bba042431\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332234 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37375f80-f004-4621-b863-326c6e296435-marketplace-trusted-ca\") pod \"37375f80-f004-4621-b863-326c6e296435\" (UID: \"37375f80-f004-4621-b863-326c6e296435\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332277 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37375f80-f004-4621-b863-326c6e296435-marketplace-operator-metrics\") pod \"37375f80-f004-4621-b863-326c6e296435\" (UID: \"37375f80-f004-4621-b863-326c6e296435\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332328 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e827307-21c0-4712-ab98-e95d277f4201-utilities\") pod \"8e827307-21c0-4712-ab98-e95d277f4201\" (UID: \"8e827307-21c0-4712-ab98-e95d277f4201\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332360 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d05c6fb-46ad-4722-b20f-c42bba042431-catalog-content\") pod \"2d05c6fb-46ad-4722-b20f-c42bba042431\" (UID: \"2d05c6fb-46ad-4722-b20f-c42bba042431\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332385 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz9dc\" (UniqueName: \"kubernetes.io/projected/8e827307-21c0-4712-ab98-e95d277f4201-kube-api-access-jz9dc\") pod \"8e827307-21c0-4712-ab98-e95d277f4201\" (UID: \"8e827307-21c0-4712-ab98-e95d277f4201\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332412 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d05c6fb-46ad-4722-b20f-c42bba042431-utilities\") pod \"2d05c6fb-46ad-4722-b20f-c42bba042431\" (UID: \"2d05c6fb-46ad-4722-b20f-c42bba042431\") "
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332644 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d0624db-755c-4a56-afd4-02eeb8b8b1db-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332659 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8psm\" (UniqueName: \"kubernetes.io/projected/0d0624db-755c-4a56-afd4-02eeb8b8b1db-kube-api-access-j8psm\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332675 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsvpn\" (UniqueName: \"kubernetes.io/projected/064c46bd-0e88-4dca-9a42-923b3eae48a1-kube-api-access-tsvpn\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332688 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/064c46bd-0e88-4dca-9a42-923b3eae48a1-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332698 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/064c46bd-0e88-4dca-9a42-923b3eae48a1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.332709 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d0624db-755c-4a56-afd4-02eeb8b8b1db-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.333523 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d05c6fb-46ad-4722-b20f-c42bba042431-utilities" (OuterVolumeSpecName: "utilities") pod "2d05c6fb-46ad-4722-b20f-c42bba042431" (UID: "2d05c6fb-46ad-4722-b20f-c42bba042431"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.334757 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37375f80-f004-4621-b863-326c6e296435-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "37375f80-f004-4621-b863-326c6e296435" (UID: "37375f80-f004-4621-b863-326c6e296435"). InnerVolumeSpecName "marketplace-trusted-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.335651 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e827307-21c0-4712-ab98-e95d277f4201-utilities" (OuterVolumeSpecName: "utilities") pod "8e827307-21c0-4712-ab98-e95d277f4201" (UID: "8e827307-21c0-4712-ab98-e95d277f4201"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.336800 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37375f80-f004-4621-b863-326c6e296435-kube-api-access-ff6zk" (OuterVolumeSpecName: "kube-api-access-ff6zk") pod "37375f80-f004-4621-b863-326c6e296435" (UID: "37375f80-f004-4621-b863-326c6e296435"). InnerVolumeSpecName "kube-api-access-ff6zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.337226 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37375f80-f004-4621-b863-326c6e296435-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "37375f80-f004-4621-b863-326c6e296435" (UID: "37375f80-f004-4621-b863-326c6e296435"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.338656 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d05c6fb-46ad-4722-b20f-c42bba042431-kube-api-access-42jvh" (OuterVolumeSpecName: "kube-api-access-42jvh") pod "2d05c6fb-46ad-4722-b20f-c42bba042431" (UID: "2d05c6fb-46ad-4722-b20f-c42bba042431"). InnerVolumeSpecName "kube-api-access-42jvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.339770 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e827307-21c0-4712-ab98-e95d277f4201-kube-api-access-jz9dc" (OuterVolumeSpecName: "kube-api-access-jz9dc") pod "8e827307-21c0-4712-ab98-e95d277f4201" (UID: "8e827307-21c0-4712-ab98-e95d277f4201"). InnerVolumeSpecName "kube-api-access-jz9dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.367707 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d05c6fb-46ad-4722-b20f-c42bba042431-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d05c6fb-46ad-4722-b20f-c42bba042431" (UID: "2d05c6fb-46ad-4722-b20f-c42bba042431"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.385356 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.433680 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff6zk\" (UniqueName: \"kubernetes.io/projected/37375f80-f004-4621-b863-326c6e296435-kube-api-access-ff6zk\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.433745 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37375f80-f004-4621-b863-326c6e296435-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.433773 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42jvh\" (UniqueName: \"kubernetes.io/projected/2d05c6fb-46ad-4722-b20f-c42bba042431-kube-api-access-42jvh\") on node \"crc\" DevicePath 
\"\"" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.433799 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37375f80-f004-4621-b863-326c6e296435-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.433821 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e827307-21c0-4712-ab98-e95d277f4201-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.433844 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d05c6fb-46ad-4722-b20f-c42bba042431-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.433867 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz9dc\" (UniqueName: \"kubernetes.io/projected/8e827307-21c0-4712-ab98-e95d277f4201-kube-api-access-jz9dc\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.433892 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d05c6fb-46ad-4722-b20f-c42bba042431-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.494705 4813 generic.go:334] "Generic (PLEG): container finished" podID="8e827307-21c0-4712-ab98-e95d277f4201" containerID="b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be" exitCode=0 Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.494785 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmz42" event={"ID":"8e827307-21c0-4712-ab98-e95d277f4201","Type":"ContainerDied","Data":"b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be"} Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 
08:45:37.494817 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmz42" event={"ID":"8e827307-21c0-4712-ab98-e95d277f4201","Type":"ContainerDied","Data":"9fca3f85be2114049942f5bed216d37fd90c06dd503e8a25c40cbfe16320d359"} Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.494810 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmz42" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.494837 4813 scope.go:117] "RemoveContainer" containerID="b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.502096 4813 generic.go:334] "Generic (PLEG): container finished" podID="064c46bd-0e88-4dca-9a42-923b3eae48a1" containerID="1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6" exitCode=0 Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.502190 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxj57" event={"ID":"064c46bd-0e88-4dca-9a42-923b3eae48a1","Type":"ContainerDied","Data":"1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6"} Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.502208 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xxj57" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.502236 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxj57" event={"ID":"064c46bd-0e88-4dca-9a42-923b3eae48a1","Type":"ContainerDied","Data":"9466a489ddeb45e261233f7d1b5a97a8f620a8ff3d454ad8a23d7e2ea40b42d2"} Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.506436 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d0624db-755c-4a56-afd4-02eeb8b8b1db" containerID="d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89" exitCode=0 Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.506511 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wtt9" event={"ID":"0d0624db-755c-4a56-afd4-02eeb8b8b1db","Type":"ContainerDied","Data":"d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89"} Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.506538 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wtt9" event={"ID":"0d0624db-755c-4a56-afd4-02eeb8b8b1db","Type":"ContainerDied","Data":"a7f92c038175d0c5cef83f1070edde485350d1f5e55f3dfed1de97a8a5158851"} Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.506636 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wtt9" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.510049 4813 generic.go:334] "Generic (PLEG): container finished" podID="37375f80-f004-4621-b863-326c6e296435" containerID="bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31" exitCode=0 Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.510178 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.510682 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" event={"ID":"37375f80-f004-4621-b863-326c6e296435","Type":"ContainerDied","Data":"bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31"} Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.510721 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-whhpn" event={"ID":"37375f80-f004-4621-b863-326c6e296435","Type":"ContainerDied","Data":"14e676424034ce3e41ee7477e9b93708861f62b2f0a52b6953d071dc7c6acf50"} Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.513699 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gsnb" event={"ID":"2d05c6fb-46ad-4722-b20f-c42bba042431","Type":"ContainerDied","Data":"ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4"} Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.513731 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gsnb" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.513621 4813 generic.go:334] "Generic (PLEG): container finished" podID="2d05c6fb-46ad-4722-b20f-c42bba042431" containerID="ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4" exitCode=0 Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.519145 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gsnb" event={"ID":"2d05c6fb-46ad-4722-b20f-c42bba042431","Type":"ContainerDied","Data":"1b70a931587f26b9b904b3e14f146f1f8b54c16ab51edec769931754bc7ebf6f"} Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.519632 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.525073 4813 scope.go:117] "RemoveContainer" containerID="d6db625f8c74eaae1978f56183b9212e1e63f473a3de392ff0ecc21bd0d3f5d9" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.554287 4813 scope.go:117] "RemoveContainer" containerID="344e85d426a04b2e57c1c865eb4b3afa875066f57bb1be7d44cdcfb0e486e69f" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.556400 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.559107 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-whhpn"] Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.566094 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-whhpn"] Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.566387 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8e827307-21c0-4712-ab98-e95d277f4201-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e827307-21c0-4712-ab98-e95d277f4201" (UID: "8e827307-21c0-4712-ab98-e95d277f4201"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.588775 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wtt9"] Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.594782 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4wtt9"] Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.597609 4813 scope.go:117] "RemoveContainer" containerID="b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be" Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.607462 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be\": container with ID starting with b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be not found: ID does not exist" containerID="b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.607514 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be"} err="failed to get container status \"b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be\": rpc error: code = NotFound desc = could not find container \"b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be\": container with ID starting with b6d54696fca16c0cacedd62121ef45d55b1ef45dc2f8383dae888c42a867f7be not found: ID does not exist" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.607542 4813 scope.go:117] "RemoveContainer" 
containerID="d6db625f8c74eaae1978f56183b9212e1e63f473a3de392ff0ecc21bd0d3f5d9" Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.610864 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6db625f8c74eaae1978f56183b9212e1e63f473a3de392ff0ecc21bd0d3f5d9\": container with ID starting with d6db625f8c74eaae1978f56183b9212e1e63f473a3de392ff0ecc21bd0d3f5d9 not found: ID does not exist" containerID="d6db625f8c74eaae1978f56183b9212e1e63f473a3de392ff0ecc21bd0d3f5d9" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.611298 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6db625f8c74eaae1978f56183b9212e1e63f473a3de392ff0ecc21bd0d3f5d9"} err="failed to get container status \"d6db625f8c74eaae1978f56183b9212e1e63f473a3de392ff0ecc21bd0d3f5d9\": rpc error: code = NotFound desc = could not find container \"d6db625f8c74eaae1978f56183b9212e1e63f473a3de392ff0ecc21bd0d3f5d9\": container with ID starting with d6db625f8c74eaae1978f56183b9212e1e63f473a3de392ff0ecc21bd0d3f5d9 not found: ID does not exist" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.611540 4813 scope.go:117] "RemoveContainer" containerID="344e85d426a04b2e57c1c865eb4b3afa875066f57bb1be7d44cdcfb0e486e69f" Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.616659 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344e85d426a04b2e57c1c865eb4b3afa875066f57bb1be7d44cdcfb0e486e69f\": container with ID starting with 344e85d426a04b2e57c1c865eb4b3afa875066f57bb1be7d44cdcfb0e486e69f not found: ID does not exist" containerID="344e85d426a04b2e57c1c865eb4b3afa875066f57bb1be7d44cdcfb0e486e69f" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.617068 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"344e85d426a04b2e57c1c865eb4b3afa875066f57bb1be7d44cdcfb0e486e69f"} err="failed to get container status \"344e85d426a04b2e57c1c865eb4b3afa875066f57bb1be7d44cdcfb0e486e69f\": rpc error: code = NotFound desc = could not find container \"344e85d426a04b2e57c1c865eb4b3afa875066f57bb1be7d44cdcfb0e486e69f\": container with ID starting with 344e85d426a04b2e57c1c865eb4b3afa875066f57bb1be7d44cdcfb0e486e69f not found: ID does not exist" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.617252 4813 scope.go:117] "RemoveContainer" containerID="1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.622999 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gsnb"] Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.627538 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gsnb"] Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.632405 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xxj57"] Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.636479 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e827307-21c0-4712-ab98-e95d277f4201-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.637055 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xxj57"] Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.640876 4813 scope.go:117] "RemoveContainer" containerID="2116145942fec1752312a92527c4d74f372ee8ecc5a3e25fb70c753717d5e013" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.659483 4813 scope.go:117] "RemoveContainer" containerID="d02f5508bccd8ed1dadc049170da963a78caee4176bebe1b53d90327dd61c200" Feb 17 08:45:37 crc 
kubenswrapper[4813]: I0217 08:45:37.672162 4813 scope.go:117] "RemoveContainer" containerID="1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6" Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.672462 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6\": container with ID starting with 1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6 not found: ID does not exist" containerID="1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.672488 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6"} err="failed to get container status \"1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6\": rpc error: code = NotFound desc = could not find container \"1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6\": container with ID starting with 1662064a66d8580fa819f268f1a2ce9ef5b5c9d2640eb653249ffae22c043fd6 not found: ID does not exist" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.672509 4813 scope.go:117] "RemoveContainer" containerID="2116145942fec1752312a92527c4d74f372ee8ecc5a3e25fb70c753717d5e013" Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.672758 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2116145942fec1752312a92527c4d74f372ee8ecc5a3e25fb70c753717d5e013\": container with ID starting with 2116145942fec1752312a92527c4d74f372ee8ecc5a3e25fb70c753717d5e013 not found: ID does not exist" containerID="2116145942fec1752312a92527c4d74f372ee8ecc5a3e25fb70c753717d5e013" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.672782 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2116145942fec1752312a92527c4d74f372ee8ecc5a3e25fb70c753717d5e013"} err="failed to get container status \"2116145942fec1752312a92527c4d74f372ee8ecc5a3e25fb70c753717d5e013\": rpc error: code = NotFound desc = could not find container \"2116145942fec1752312a92527c4d74f372ee8ecc5a3e25fb70c753717d5e013\": container with ID starting with 2116145942fec1752312a92527c4d74f372ee8ecc5a3e25fb70c753717d5e013 not found: ID does not exist" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.672795 4813 scope.go:117] "RemoveContainer" containerID="d02f5508bccd8ed1dadc049170da963a78caee4176bebe1b53d90327dd61c200" Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.672990 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d02f5508bccd8ed1dadc049170da963a78caee4176bebe1b53d90327dd61c200\": container with ID starting with d02f5508bccd8ed1dadc049170da963a78caee4176bebe1b53d90327dd61c200 not found: ID does not exist" containerID="d02f5508bccd8ed1dadc049170da963a78caee4176bebe1b53d90327dd61c200" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.673018 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02f5508bccd8ed1dadc049170da963a78caee4176bebe1b53d90327dd61c200"} err="failed to get container status \"d02f5508bccd8ed1dadc049170da963a78caee4176bebe1b53d90327dd61c200\": rpc error: code = NotFound desc = could not find container \"d02f5508bccd8ed1dadc049170da963a78caee4176bebe1b53d90327dd61c200\": container with ID starting with d02f5508bccd8ed1dadc049170da963a78caee4176bebe1b53d90327dd61c200 not found: ID does not exist" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.673031 4813 scope.go:117] "RemoveContainer" containerID="d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.683666 4813 scope.go:117] "RemoveContainer" 
containerID="77b7a176934f3ad72e8c51cf0834c606954d6a54689f82f318890249deefea05" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.689758 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.703005 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.704746 4813 scope.go:117] "RemoveContainer" containerID="a6d3617fcfc1613ad54c5a7bc8c19999a88585719b96c8537763829cafefd81a" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.718086 4813 scope.go:117] "RemoveContainer" containerID="d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89" Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.720248 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89\": container with ID starting with d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89 not found: ID does not exist" containerID="d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.720395 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89"} err="failed to get container status \"d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89\": rpc error: code = NotFound desc = could not find container \"d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89\": container with ID starting with d037ce5c8afd370190c173ec98769ac0e4da95899b207f0578fbfd1dc4d38b89 not found: ID does not exist" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.720515 4813 scope.go:117] "RemoveContainer" 
containerID="77b7a176934f3ad72e8c51cf0834c606954d6a54689f82f318890249deefea05" Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.721009 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b7a176934f3ad72e8c51cf0834c606954d6a54689f82f318890249deefea05\": container with ID starting with 77b7a176934f3ad72e8c51cf0834c606954d6a54689f82f318890249deefea05 not found: ID does not exist" containerID="77b7a176934f3ad72e8c51cf0834c606954d6a54689f82f318890249deefea05" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.721060 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b7a176934f3ad72e8c51cf0834c606954d6a54689f82f318890249deefea05"} err="failed to get container status \"77b7a176934f3ad72e8c51cf0834c606954d6a54689f82f318890249deefea05\": rpc error: code = NotFound desc = could not find container \"77b7a176934f3ad72e8c51cf0834c606954d6a54689f82f318890249deefea05\": container with ID starting with 77b7a176934f3ad72e8c51cf0834c606954d6a54689f82f318890249deefea05 not found: ID does not exist" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.721093 4813 scope.go:117] "RemoveContainer" containerID="a6d3617fcfc1613ad54c5a7bc8c19999a88585719b96c8537763829cafefd81a" Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.721560 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d3617fcfc1613ad54c5a7bc8c19999a88585719b96c8537763829cafefd81a\": container with ID starting with a6d3617fcfc1613ad54c5a7bc8c19999a88585719b96c8537763829cafefd81a not found: ID does not exist" containerID="a6d3617fcfc1613ad54c5a7bc8c19999a88585719b96c8537763829cafefd81a" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.721587 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a6d3617fcfc1613ad54c5a7bc8c19999a88585719b96c8537763829cafefd81a"} err="failed to get container status \"a6d3617fcfc1613ad54c5a7bc8c19999a88585719b96c8537763829cafefd81a\": rpc error: code = NotFound desc = could not find container \"a6d3617fcfc1613ad54c5a7bc8c19999a88585719b96c8537763829cafefd81a\": container with ID starting with a6d3617fcfc1613ad54c5a7bc8c19999a88585719b96c8537763829cafefd81a not found: ID does not exist" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.721607 4813 scope.go:117] "RemoveContainer" containerID="bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.734370 4813 scope.go:117] "RemoveContainer" containerID="bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31" Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.734728 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31\": container with ID starting with bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31 not found: ID does not exist" containerID="bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.734765 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31"} err="failed to get container status \"bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31\": rpc error: code = NotFound desc = could not find container \"bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31\": container with ID starting with bc2fa73bc2d866e1799e9cd3706957babe9eccf0a9ed26dabda33a1c640d0a31 not found: ID does not exist" Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.734795 4813 scope.go:117] "RemoveContainer" 
containerID="ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.746200 4813 scope.go:117] "RemoveContainer" containerID="7c2cb3c9fb68d6d232cd1a79cafa404753117d1be70d1f20dce835f12a38982a"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.779423 4813 scope.go:117] "RemoveContainer" containerID="4c4314bf2eedbfc3fe285f440effd7d5474f69a6c7c4dd19e620c4426bf18f6b"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.795338 4813 scope.go:117] "RemoveContainer" containerID="ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4"
Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.795909 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4\": container with ID starting with ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4 not found: ID does not exist" containerID="ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.795935 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4"} err="failed to get container status \"ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4\": rpc error: code = NotFound desc = could not find container \"ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4\": container with ID starting with ad4fdf53a758baaa50bd5c737693ab65c95a85d5e157efc5e3549e7c1d3dc9c4 not found: ID does not exist"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.795954 4813 scope.go:117] "RemoveContainer" containerID="7c2cb3c9fb68d6d232cd1a79cafa404753117d1be70d1f20dce835f12a38982a"
Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.796114 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c2cb3c9fb68d6d232cd1a79cafa404753117d1be70d1f20dce835f12a38982a\": container with ID starting with 7c2cb3c9fb68d6d232cd1a79cafa404753117d1be70d1f20dce835f12a38982a not found: ID does not exist" containerID="7c2cb3c9fb68d6d232cd1a79cafa404753117d1be70d1f20dce835f12a38982a"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.796136 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c2cb3c9fb68d6d232cd1a79cafa404753117d1be70d1f20dce835f12a38982a"} err="failed to get container status \"7c2cb3c9fb68d6d232cd1a79cafa404753117d1be70d1f20dce835f12a38982a\": rpc error: code = NotFound desc = could not find container \"7c2cb3c9fb68d6d232cd1a79cafa404753117d1be70d1f20dce835f12a38982a\": container with ID starting with 7c2cb3c9fb68d6d232cd1a79cafa404753117d1be70d1f20dce835f12a38982a not found: ID does not exist"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.796150 4813 scope.go:117] "RemoveContainer" containerID="4c4314bf2eedbfc3fe285f440effd7d5474f69a6c7c4dd19e620c4426bf18f6b"
Feb 17 08:45:37 crc kubenswrapper[4813]: E0217 08:45:37.796325 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c4314bf2eedbfc3fe285f440effd7d5474f69a6c7c4dd19e620c4426bf18f6b\": container with ID starting with 4c4314bf2eedbfc3fe285f440effd7d5474f69a6c7c4dd19e620c4426bf18f6b not found: ID does not exist" containerID="4c4314bf2eedbfc3fe285f440effd7d5474f69a6c7c4dd19e620c4426bf18f6b"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.796345 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c4314bf2eedbfc3fe285f440effd7d5474f69a6c7c4dd19e620c4426bf18f6b"} err="failed to get container status \"4c4314bf2eedbfc3fe285f440effd7d5474f69a6c7c4dd19e620c4426bf18f6b\": rpc error: code = NotFound desc = could not find container \"4c4314bf2eedbfc3fe285f440effd7d5474f69a6c7c4dd19e620c4426bf18f6b\": container with ID starting with 4c4314bf2eedbfc3fe285f440effd7d5474f69a6c7c4dd19e620c4426bf18f6b not found: ID does not exist"
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.860841 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmz42"]
Feb 17 08:45:37 crc kubenswrapper[4813]: I0217 08:45:37.865097 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rmz42"]
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.110599 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8lt56"]
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.111101 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e827307-21c0-4712-ab98-e95d277f4201" containerName="extract-content"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.111202 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e827307-21c0-4712-ab98-e95d277f4201" containerName="extract-content"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.111282 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d05c6fb-46ad-4722-b20f-c42bba042431" containerName="extract-utilities"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.111375 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d05c6fb-46ad-4722-b20f-c42bba042431" containerName="extract-utilities"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.111459 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37375f80-f004-4621-b863-326c6e296435" containerName="marketplace-operator"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.111550 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="37375f80-f004-4621-b863-326c6e296435" containerName="marketplace-operator"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.111627 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e827307-21c0-4712-ab98-e95d277f4201" containerName="extract-utilities"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.111710 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e827307-21c0-4712-ab98-e95d277f4201" containerName="extract-utilities"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.111785 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d05c6fb-46ad-4722-b20f-c42bba042431" containerName="registry-server"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.111862 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d05c6fb-46ad-4722-b20f-c42bba042431" containerName="registry-server"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.111944 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0624db-755c-4a56-afd4-02eeb8b8b1db" containerName="registry-server"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.112021 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0624db-755c-4a56-afd4-02eeb8b8b1db" containerName="registry-server"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.112107 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0624db-755c-4a56-afd4-02eeb8b8b1db" containerName="extract-content"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.112187 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0624db-755c-4a56-afd4-02eeb8b8b1db" containerName="extract-content"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.112286 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064c46bd-0e88-4dca-9a42-923b3eae48a1" containerName="extract-content"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.112381 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="064c46bd-0e88-4dca-9a42-923b3eae48a1" containerName="extract-content"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.112471 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064c46bd-0e88-4dca-9a42-923b3eae48a1" containerName="registry-server"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.112550 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="064c46bd-0e88-4dca-9a42-923b3eae48a1" containerName="registry-server"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.112630 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e827307-21c0-4712-ab98-e95d277f4201" containerName="registry-server"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.112713 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e827307-21c0-4712-ab98-e95d277f4201" containerName="registry-server"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.112795 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" containerName="installer"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.112809 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" containerName="installer"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.112822 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064c46bd-0e88-4dca-9a42-923b3eae48a1" containerName="extract-utilities"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.112830 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="064c46bd-0e88-4dca-9a42-923b3eae48a1" containerName="extract-utilities"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.112842 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0624db-755c-4a56-afd4-02eeb8b8b1db" containerName="extract-utilities"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.112849 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0624db-755c-4a56-afd4-02eeb8b8b1db" containerName="extract-utilities"
Feb 17 08:45:38 crc kubenswrapper[4813]: E0217 08:45:38.112861 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d05c6fb-46ad-4722-b20f-c42bba042431" containerName="extract-content"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.112868 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d05c6fb-46ad-4722-b20f-c42bba042431" containerName="extract-content"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.112993 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d0624db-755c-4a56-afd4-02eeb8b8b1db" containerName="registry-server"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.113004 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f75abd-b269-471e-bf17-66e5c0afb5dd" containerName="installer"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.113016 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="37375f80-f004-4621-b863-326c6e296435" containerName="marketplace-operator"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.113026 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e827307-21c0-4712-ab98-e95d277f4201" containerName="registry-server"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.113039 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="064c46bd-0e88-4dca-9a42-923b3eae48a1" containerName="registry-server"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.113051 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d05c6fb-46ad-4722-b20f-c42bba042431" containerName="registry-server"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.113561 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.116947 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.117201 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.117558 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.118637 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.121175 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8lt56"]
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.137144 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.184510 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.213863 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"]
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.214653 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.216487 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.216746 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.221115 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"]
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.244619 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvs77\" (UniqueName: \"kubernetes.io/projected/ede3523a-8348-4d9f-871d-ba4c36857ac4-kube-api-access-bvs77\") pod \"marketplace-operator-79b997595-8lt56\" (UID: \"ede3523a-8348-4d9f-871d-ba4c36857ac4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.244719 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ede3523a-8348-4d9f-871d-ba4c36857ac4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8lt56\" (UID: \"ede3523a-8348-4d9f-871d-ba4c36857ac4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.244776 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ede3523a-8348-4d9f-871d-ba4c36857ac4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8lt56\" (UID: \"ede3523a-8348-4d9f-871d-ba4c36857ac4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.345956 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d09f533d-a044-43e1-a0ae-19756fd861ca-config-volume\") pod \"collect-profiles-29521965-xk2c8\" (UID: \"d09f533d-a044-43e1-a0ae-19756fd861ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.346275 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ede3523a-8348-4d9f-871d-ba4c36857ac4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8lt56\" (UID: \"ede3523a-8348-4d9f-871d-ba4c36857ac4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.346433 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ede3523a-8348-4d9f-871d-ba4c36857ac4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8lt56\" (UID: \"ede3523a-8348-4d9f-871d-ba4c36857ac4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.346543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d09f533d-a044-43e1-a0ae-19756fd861ca-secret-volume\") pod \"collect-profiles-29521965-xk2c8\" (UID: \"d09f533d-a044-43e1-a0ae-19756fd861ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.346626 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvs77\" (UniqueName: \"kubernetes.io/projected/ede3523a-8348-4d9f-871d-ba4c36857ac4-kube-api-access-bvs77\") pod \"marketplace-operator-79b997595-8lt56\" (UID: \"ede3523a-8348-4d9f-871d-ba4c36857ac4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.346712 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m87zt\" (UniqueName: \"kubernetes.io/projected/d09f533d-a044-43e1-a0ae-19756fd861ca-kube-api-access-m87zt\") pod \"collect-profiles-29521965-xk2c8\" (UID: \"d09f533d-a044-43e1-a0ae-19756fd861ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.347783 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ede3523a-8348-4d9f-871d-ba4c36857ac4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8lt56\" (UID: \"ede3523a-8348-4d9f-871d-ba4c36857ac4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.349740 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ede3523a-8348-4d9f-871d-ba4c36857ac4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8lt56\" (UID: \"ede3523a-8348-4d9f-871d-ba4c36857ac4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.361713 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvs77\" (UniqueName: \"kubernetes.io/projected/ede3523a-8348-4d9f-871d-ba4c36857ac4-kube-api-access-bvs77\") pod \"marketplace-operator-79b997595-8lt56\" (UID: \"ede3523a-8348-4d9f-871d-ba4c36857ac4\") " pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.430025 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.449372 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d09f533d-a044-43e1-a0ae-19756fd861ca-secret-volume\") pod \"collect-profiles-29521965-xk2c8\" (UID: \"d09f533d-a044-43e1-a0ae-19756fd861ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.449434 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m87zt\" (UniqueName: \"kubernetes.io/projected/d09f533d-a044-43e1-a0ae-19756fd861ca-kube-api-access-m87zt\") pod \"collect-profiles-29521965-xk2c8\" (UID: \"d09f533d-a044-43e1-a0ae-19756fd861ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.449495 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d09f533d-a044-43e1-a0ae-19756fd861ca-config-volume\") pod \"collect-profiles-29521965-xk2c8\" (UID: \"d09f533d-a044-43e1-a0ae-19756fd861ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.450607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d09f533d-a044-43e1-a0ae-19756fd861ca-config-volume\") pod \"collect-profiles-29521965-xk2c8\" (UID: \"d09f533d-a044-43e1-a0ae-19756fd861ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.454507 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d09f533d-a044-43e1-a0ae-19756fd861ca-secret-volume\") pod \"collect-profiles-29521965-xk2c8\" (UID: \"d09f533d-a044-43e1-a0ae-19756fd861ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.478057 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m87zt\" (UniqueName: \"kubernetes.io/projected/d09f533d-a044-43e1-a0ae-19756fd861ca-kube-api-access-m87zt\") pod \"collect-profiles-29521965-xk2c8\" (UID: \"d09f533d-a044-43e1-a0ae-19756fd861ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.534553 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.629682 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8lt56"]
Feb 17 08:45:38 crc kubenswrapper[4813]: I0217 08:45:38.711799 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"]
Feb 17 08:45:38 crc kubenswrapper[4813]: W0217 08:45:38.729546 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd09f533d_a044_43e1_a0ae_19756fd861ca.slice/crio-10178c93618cb7d1954f14decaaf5efd33a245e9d646846ff83bf0dbda2b45a4 WatchSource:0}: Error finding container 10178c93618cb7d1954f14decaaf5efd33a245e9d646846ff83bf0dbda2b45a4: Status 404 returned error can't find the container with id 10178c93618cb7d1954f14decaaf5efd33a245e9d646846ff83bf0dbda2b45a4
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.117332 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064c46bd-0e88-4dca-9a42-923b3eae48a1" path="/var/lib/kubelet/pods/064c46bd-0e88-4dca-9a42-923b3eae48a1/volumes"
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.118354 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d0624db-755c-4a56-afd4-02eeb8b8b1db" path="/var/lib/kubelet/pods/0d0624db-755c-4a56-afd4-02eeb8b8b1db/volumes"
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.118898 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d05c6fb-46ad-4722-b20f-c42bba042431" path="/var/lib/kubelet/pods/2d05c6fb-46ad-4722-b20f-c42bba042431/volumes"
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.119494 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37375f80-f004-4621-b863-326c6e296435" path="/var/lib/kubelet/pods/37375f80-f004-4621-b863-326c6e296435/volumes"
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.119910 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e827307-21c0-4712-ab98-e95d277f4201" path="/var/lib/kubelet/pods/8e827307-21c0-4712-ab98-e95d277f4201/volumes"
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.180843 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.540776 4813 generic.go:334] "Generic (PLEG): container finished" podID="d09f533d-a044-43e1-a0ae-19756fd861ca" containerID="feb42dec6b51db5bee253d39d77df4fb84946ef245ec8a31ce1cfa62ce9070e0" exitCode=0
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.540875 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8" event={"ID":"d09f533d-a044-43e1-a0ae-19756fd861ca","Type":"ContainerDied","Data":"feb42dec6b51db5bee253d39d77df4fb84946ef245ec8a31ce1cfa62ce9070e0"}
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.540913 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8" event={"ID":"d09f533d-a044-43e1-a0ae-19756fd861ca","Type":"ContainerStarted","Data":"10178c93618cb7d1954f14decaaf5efd33a245e9d646846ff83bf0dbda2b45a4"}
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.542498 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8lt56" event={"ID":"ede3523a-8348-4d9f-871d-ba4c36857ac4","Type":"ContainerStarted","Data":"52d3d81c020c9962823c867ff425a48473bd8b809f868fab2e0721839dac1087"}
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.542530 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8lt56" event={"ID":"ede3523a-8348-4d9f-871d-ba4c36857ac4","Type":"ContainerStarted","Data":"17660230019c3530231b50443c9e6bf859d4d04a0e9260236f1625192bd520b8"}
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.542762 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.546412 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8lt56"
Feb 17 08:45:39 crc kubenswrapper[4813]: I0217 08:45:39.566824 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8lt56" podStartSLOduration=1.56680599 podStartE2EDuration="1.56680599s" podCreationTimestamp="2026-02-17 08:45:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:45:39.566191813 +0000 UTC m=+287.226953076" watchObservedRunningTime="2026-02-17 08:45:39.56680599 +0000 UTC m=+287.227567213"
Feb 17 08:45:40 crc kubenswrapper[4813]: I0217 08:45:40.823625 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:40 crc kubenswrapper[4813]: I0217 08:45:40.977746 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d09f533d-a044-43e1-a0ae-19756fd861ca-config-volume\") pod \"d09f533d-a044-43e1-a0ae-19756fd861ca\" (UID: \"d09f533d-a044-43e1-a0ae-19756fd861ca\") "
Feb 17 08:45:40 crc kubenswrapper[4813]: I0217 08:45:40.977870 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m87zt\" (UniqueName: \"kubernetes.io/projected/d09f533d-a044-43e1-a0ae-19756fd861ca-kube-api-access-m87zt\") pod \"d09f533d-a044-43e1-a0ae-19756fd861ca\" (UID: \"d09f533d-a044-43e1-a0ae-19756fd861ca\") "
Feb 17 08:45:40 crc kubenswrapper[4813]: I0217 08:45:40.977942 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d09f533d-a044-43e1-a0ae-19756fd861ca-secret-volume\") pod \"d09f533d-a044-43e1-a0ae-19756fd861ca\" (UID: \"d09f533d-a044-43e1-a0ae-19756fd861ca\") "
Feb 17 08:45:40 crc kubenswrapper[4813]: I0217 08:45:40.978450 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d09f533d-a044-43e1-a0ae-19756fd861ca-config-volume" (OuterVolumeSpecName: "config-volume") pod "d09f533d-a044-43e1-a0ae-19756fd861ca" (UID: "d09f533d-a044-43e1-a0ae-19756fd861ca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 08:45:40 crc kubenswrapper[4813]: I0217 08:45:40.983045 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d09f533d-a044-43e1-a0ae-19756fd861ca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d09f533d-a044-43e1-a0ae-19756fd861ca" (UID: "d09f533d-a044-43e1-a0ae-19756fd861ca"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:45:40 crc kubenswrapper[4813]: I0217 08:45:40.983432 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09f533d-a044-43e1-a0ae-19756fd861ca-kube-api-access-m87zt" (OuterVolumeSpecName: "kube-api-access-m87zt") pod "d09f533d-a044-43e1-a0ae-19756fd861ca" (UID: "d09f533d-a044-43e1-a0ae-19756fd861ca"). InnerVolumeSpecName "kube-api-access-m87zt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:45:41 crc kubenswrapper[4813]: I0217 08:45:41.079817 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d09f533d-a044-43e1-a0ae-19756fd861ca-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:41 crc kubenswrapper[4813]: I0217 08:45:41.079855 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d09f533d-a044-43e1-a0ae-19756fd861ca-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:41 crc kubenswrapper[4813]: I0217 08:45:41.079871 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m87zt\" (UniqueName: \"kubernetes.io/projected/d09f533d-a044-43e1-a0ae-19756fd861ca-kube-api-access-m87zt\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:41 crc kubenswrapper[4813]: I0217 08:45:41.557348 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8"
Feb 17 08:45:41 crc kubenswrapper[4813]: I0217 08:45:41.557411 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521965-xk2c8" event={"ID":"d09f533d-a044-43e1-a0ae-19756fd861ca","Type":"ContainerDied","Data":"10178c93618cb7d1954f14decaaf5efd33a245e9d646846ff83bf0dbda2b45a4"}
Feb 17 08:45:41 crc kubenswrapper[4813]: I0217 08:45:41.557506 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10178c93618cb7d1954f14decaaf5efd33a245e9d646846ff83bf0dbda2b45a4"
Feb 17 08:45:44 crc kubenswrapper[4813]: I0217 08:45:44.133739 4813 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 17 08:45:44 crc kubenswrapper[4813]: I0217 08:45:44.135475 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://46603adae48d144608c4e88831928dd9429e74024a7de4e19a8584eedc7715c7" gracePeriod=5
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.603224 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.603803 4813 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="46603adae48d144608c4e88831928dd9429e74024a7de4e19a8584eedc7715c7" exitCode=137
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.741502 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.741591 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.901436 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.901489 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.901548 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.901565 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.901638 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.901679 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.901761 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.901753 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.901809 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.901996 4813 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.902037 4813 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.902051 4813 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.902061 4813 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 17 08:45:49 crc kubenswrapper[4813]: I0217 08:45:49.913573 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:45:50 crc kubenswrapper[4813]: I0217 08:45:50.002796 4813 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:50 crc kubenswrapper[4813]: I0217 08:45:50.612521 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 08:45:50 crc kubenswrapper[4813]: I0217 08:45:50.612655 4813 scope.go:117] "RemoveContainer" containerID="46603adae48d144608c4e88831928dd9429e74024a7de4e19a8584eedc7715c7" Feb 17 08:45:50 crc kubenswrapper[4813]: I0217 08:45:50.612747 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 08:45:51 crc kubenswrapper[4813]: I0217 08:45:51.116657 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 17 08:45:52 crc kubenswrapper[4813]: I0217 08:45:52.870742 4813 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.410447 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fkwkx"] Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.411258 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" podUID="9f185fcc-0363-430b-a331-0e8ea791f9f6" containerName="controller-manager" containerID="cri-o://266616d216b25dee2694d0a3a2c909b1b11d7d10936e90156fa3d2227e26fe50" gracePeriod=30 Feb 17 08:45:59 crc 
kubenswrapper[4813]: I0217 08:45:59.509217 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8"] Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.509501 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" podUID="d11448d0-33b5-4b7e-ab63-bf30a82b0cb2" containerName="route-controller-manager" containerID="cri-o://534d4bb8287cb5216564afabece691d693b9f0df6771a7c99daeb61827727b12" gracePeriod=30 Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.666049 4813 generic.go:334] "Generic (PLEG): container finished" podID="d11448d0-33b5-4b7e-ab63-bf30a82b0cb2" containerID="534d4bb8287cb5216564afabece691d693b9f0df6771a7c99daeb61827727b12" exitCode=0 Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.666154 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" event={"ID":"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2","Type":"ContainerDied","Data":"534d4bb8287cb5216564afabece691d693b9f0df6771a7c99daeb61827727b12"} Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.671452 4813 generic.go:334] "Generic (PLEG): container finished" podID="9f185fcc-0363-430b-a331-0e8ea791f9f6" containerID="266616d216b25dee2694d0a3a2c909b1b11d7d10936e90156fa3d2227e26fe50" exitCode=0 Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.671505 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" event={"ID":"9f185fcc-0363-430b-a331-0e8ea791f9f6","Type":"ContainerDied","Data":"266616d216b25dee2694d0a3a2c909b1b11d7d10936e90156fa3d2227e26fe50"} Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.777205 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.801084 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-client-ca\") pod \"9f185fcc-0363-430b-a331-0e8ea791f9f6\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.801176 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-proxy-ca-bundles\") pod \"9f185fcc-0363-430b-a331-0e8ea791f9f6\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.801203 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f185fcc-0363-430b-a331-0e8ea791f9f6-serving-cert\") pod \"9f185fcc-0363-430b-a331-0e8ea791f9f6\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.801251 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-config\") pod \"9f185fcc-0363-430b-a331-0e8ea791f9f6\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.801340 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2bqf\" (UniqueName: \"kubernetes.io/projected/9f185fcc-0363-430b-a331-0e8ea791f9f6-kube-api-access-f2bqf\") pod \"9f185fcc-0363-430b-a331-0e8ea791f9f6\" (UID: \"9f185fcc-0363-430b-a331-0e8ea791f9f6\") " Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.801954 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-client-ca" (OuterVolumeSpecName: "client-ca") pod "9f185fcc-0363-430b-a331-0e8ea791f9f6" (UID: "9f185fcc-0363-430b-a331-0e8ea791f9f6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.804218 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9f185fcc-0363-430b-a331-0e8ea791f9f6" (UID: "9f185fcc-0363-430b-a331-0e8ea791f9f6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.804252 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-config" (OuterVolumeSpecName: "config") pod "9f185fcc-0363-430b-a331-0e8ea791f9f6" (UID: "9f185fcc-0363-430b-a331-0e8ea791f9f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.815006 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f185fcc-0363-430b-a331-0e8ea791f9f6-kube-api-access-f2bqf" (OuterVolumeSpecName: "kube-api-access-f2bqf") pod "9f185fcc-0363-430b-a331-0e8ea791f9f6" (UID: "9f185fcc-0363-430b-a331-0e8ea791f9f6"). InnerVolumeSpecName "kube-api-access-f2bqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.816417 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f185fcc-0363-430b-a331-0e8ea791f9f6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9f185fcc-0363-430b-a331-0e8ea791f9f6" (UID: "9f185fcc-0363-430b-a331-0e8ea791f9f6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.847894 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.901682 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zcdq\" (UniqueName: \"kubernetes.io/projected/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-kube-api-access-2zcdq\") pod \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.901740 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-serving-cert\") pod \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.901782 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-config\") pod \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.901823 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-client-ca\") pod \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\" (UID: \"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2\") " Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.902007 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2bqf\" (UniqueName: \"kubernetes.io/projected/9f185fcc-0363-430b-a331-0e8ea791f9f6-kube-api-access-f2bqf\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:59 crc 
kubenswrapper[4813]: I0217 08:45:59.902025 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.902037 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.902048 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f185fcc-0363-430b-a331-0e8ea791f9f6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.902059 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f185fcc-0363-430b-a331-0e8ea791f9f6-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.902835 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-client-ca" (OuterVolumeSpecName: "client-ca") pod "d11448d0-33b5-4b7e-ab63-bf30a82b0cb2" (UID: "d11448d0-33b5-4b7e-ab63-bf30a82b0cb2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.902943 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-config" (OuterVolumeSpecName: "config") pod "d11448d0-33b5-4b7e-ab63-bf30a82b0cb2" (UID: "d11448d0-33b5-4b7e-ab63-bf30a82b0cb2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.904815 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d11448d0-33b5-4b7e-ab63-bf30a82b0cb2" (UID: "d11448d0-33b5-4b7e-ab63-bf30a82b0cb2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:45:59 crc kubenswrapper[4813]: I0217 08:45:59.907409 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-kube-api-access-2zcdq" (OuterVolumeSpecName: "kube-api-access-2zcdq") pod "d11448d0-33b5-4b7e-ab63-bf30a82b0cb2" (UID: "d11448d0-33b5-4b7e-ab63-bf30a82b0cb2"). InnerVolumeSpecName "kube-api-access-2zcdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.002569 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zcdq\" (UniqueName: \"kubernetes.io/projected/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-kube-api-access-2zcdq\") on node \"crc\" DevicePath \"\"" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.002607 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.002619 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.002632 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2-client-ca\") on node \"crc\" DevicePath 
\"\"" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.573227 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx"] Feb 17 08:46:00 crc kubenswrapper[4813]: E0217 08:46:00.573610 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.573631 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 08:46:00 crc kubenswrapper[4813]: E0217 08:46:00.573653 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11448d0-33b5-4b7e-ab63-bf30a82b0cb2" containerName="route-controller-manager" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.573667 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11448d0-33b5-4b7e-ab63-bf30a82b0cb2" containerName="route-controller-manager" Feb 17 08:46:00 crc kubenswrapper[4813]: E0217 08:46:00.573698 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f185fcc-0363-430b-a331-0e8ea791f9f6" containerName="controller-manager" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.573724 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f185fcc-0363-430b-a331-0e8ea791f9f6" containerName="controller-manager" Feb 17 08:46:00 crc kubenswrapper[4813]: E0217 08:46:00.573745 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d09f533d-a044-43e1-a0ae-19756fd861ca" containerName="collect-profiles" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.573757 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09f533d-a044-43e1-a0ae-19756fd861ca" containerName="collect-profiles" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.573936 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d09f533d-a044-43e1-a0ae-19756fd861ca" 
containerName="collect-profiles" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.573963 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11448d0-33b5-4b7e-ab63-bf30a82b0cb2" containerName="route-controller-manager" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.573980 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.573996 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f185fcc-0363-430b-a331-0e8ea791f9f6" containerName="controller-manager" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.574524 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.583381 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d6d847d76-r7qjv"] Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.584358 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.589249 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx"] Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.611761 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062bbcea-d872-49af-8a7c-aab743110b97-serving-cert\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.611808 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d6d847d76-r7qjv"] Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.611848 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sphlt\" (UniqueName: \"kubernetes.io/projected/062bbcea-d872-49af-8a7c-aab743110b97-kube-api-access-sphlt\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.611897 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-proxy-ca-bundles\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.611928 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/1ffe5179-7521-48ff-95ab-a06b983b2b25-client-ca\") pod \"route-controller-manager-6fcf8b4c5d-przsx\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.612017 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n5ff\" (UniqueName: \"kubernetes.io/projected/1ffe5179-7521-48ff-95ab-a06b983b2b25-kube-api-access-9n5ff\") pod \"route-controller-manager-6fcf8b4c5d-przsx\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.612082 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffe5179-7521-48ff-95ab-a06b983b2b25-serving-cert\") pod \"route-controller-manager-6fcf8b4c5d-przsx\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.612113 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-client-ca\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.612137 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-config\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " 
pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.612159 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffe5179-7521-48ff-95ab-a06b983b2b25-config\") pod \"route-controller-manager-6fcf8b4c5d-przsx\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.676735 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" event={"ID":"d11448d0-33b5-4b7e-ab63-bf30a82b0cb2","Type":"ContainerDied","Data":"ce7eb9d41cfea79f398de592decb8b44b973f6f1bb312d58172bcf347193c18a"} Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.676792 4813 scope.go:117] "RemoveContainer" containerID="534d4bb8287cb5216564afabece691d693b9f0df6771a7c99daeb61827727b12" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.676794 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.678081 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" event={"ID":"9f185fcc-0363-430b-a331-0e8ea791f9f6","Type":"ContainerDied","Data":"f1e91e25c895a73029f1085755deb5d9e7366ec10c742c425a1b07b9ba7b6139"} Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.678161 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fkwkx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.700639 4813 scope.go:117] "RemoveContainer" containerID="266616d216b25dee2694d0a3a2c909b1b11d7d10936e90156fa3d2227e26fe50" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.706782 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fkwkx"] Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.709325 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fkwkx"] Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.712851 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sphlt\" (UniqueName: \"kubernetes.io/projected/062bbcea-d872-49af-8a7c-aab743110b97-kube-api-access-sphlt\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.712900 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-proxy-ca-bundles\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.712920 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ffe5179-7521-48ff-95ab-a06b983b2b25-client-ca\") pod \"route-controller-manager-6fcf8b4c5d-przsx\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 
08:46:00.712944 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n5ff\" (UniqueName: \"kubernetes.io/projected/1ffe5179-7521-48ff-95ab-a06b983b2b25-kube-api-access-9n5ff\") pod \"route-controller-manager-6fcf8b4c5d-przsx\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.712975 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffe5179-7521-48ff-95ab-a06b983b2b25-serving-cert\") pod \"route-controller-manager-6fcf8b4c5d-przsx\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.712997 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-client-ca\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.714050 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-config\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.713808 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ffe5179-7521-48ff-95ab-a06b983b2b25-client-ca\") pod \"route-controller-manager-6fcf8b4c5d-przsx\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " 
pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.714078 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffe5179-7521-48ff-95ab-a06b983b2b25-config\") pod \"route-controller-manager-6fcf8b4c5d-przsx\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.714228 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062bbcea-d872-49af-8a7c-aab743110b97-serving-cert\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.714237 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-proxy-ca-bundles\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.714861 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-client-ca\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.716015 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffe5179-7521-48ff-95ab-a06b983b2b25-config\") pod 
\"route-controller-manager-6fcf8b4c5d-przsx\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.716139 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-config\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.718448 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffe5179-7521-48ff-95ab-a06b983b2b25-serving-cert\") pod \"route-controller-manager-6fcf8b4c5d-przsx\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.722246 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062bbcea-d872-49af-8a7c-aab743110b97-serving-cert\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.734944 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sphlt\" (UniqueName: \"kubernetes.io/projected/062bbcea-d872-49af-8a7c-aab743110b97-kube-api-access-sphlt\") pod \"controller-manager-7d6d847d76-r7qjv\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.735800 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n5ff\" 
(UniqueName: \"kubernetes.io/projected/1ffe5179-7521-48ff-95ab-a06b983b2b25-kube-api-access-9n5ff\") pod \"route-controller-manager-6fcf8b4c5d-przsx\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.784282 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8"] Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.789074 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-42mx8"] Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.918216 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:00 crc kubenswrapper[4813]: I0217 08:46:00.927070 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.124695 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f185fcc-0363-430b-a331-0e8ea791f9f6" path="/var/lib/kubelet/pods/9f185fcc-0363-430b-a331-0e8ea791f9f6/volumes" Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.126140 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11448d0-33b5-4b7e-ab63-bf30a82b0cb2" path="/var/lib/kubelet/pods/d11448d0-33b5-4b7e-ab63-bf30a82b0cb2/volumes" Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.237290 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx"] Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.251035 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d6d847d76-r7qjv"] Feb 17 08:46:01 crc kubenswrapper[4813]: W0217 08:46:01.256635 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod062bbcea_d872_49af_8a7c_aab743110b97.slice/crio-4c46f9bac87f90e4f7f4dfe477853157d8b9dae00968ab98b7e5abf3dfa87453 WatchSource:0}: Error finding container 4c46f9bac87f90e4f7f4dfe477853157d8b9dae00968ab98b7e5abf3dfa87453: Status 404 returned error can't find the container with id 4c46f9bac87f90e4f7f4dfe477853157d8b9dae00968ab98b7e5abf3dfa87453 Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.683638 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" event={"ID":"062bbcea-d872-49af-8a7c-aab743110b97","Type":"ContainerStarted","Data":"76cc743e5f438a6f91e2591e8fdccded0848a5c1f31bdde60f7029ebc0d39008"} Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.683991 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" event={"ID":"062bbcea-d872-49af-8a7c-aab743110b97","Type":"ContainerStarted","Data":"4c46f9bac87f90e4f7f4dfe477853157d8b9dae00968ab98b7e5abf3dfa87453"} Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.684703 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.688743 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" event={"ID":"1ffe5179-7521-48ff-95ab-a06b983b2b25","Type":"ContainerStarted","Data":"573adc47f31e88c3c8d16ccbc56a645ead938c396c0d67aca1b47087158920f0"} Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.688772 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" event={"ID":"1ffe5179-7521-48ff-95ab-a06b983b2b25","Type":"ContainerStarted","Data":"d4fc5d4715291e0692a0bfe68be3b245411379f40f82c6b567f86d5cfa04c880"} Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.689038 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.695325 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.727228 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" podStartSLOduration=2.727211465 podStartE2EDuration="2.727211465s" podCreationTimestamp="2026-02-17 08:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 08:46:01.709414039 +0000 UTC m=+309.370175282" watchObservedRunningTime="2026-02-17 08:46:01.727211465 +0000 UTC m=+309.387972688" Feb 17 08:46:01 crc kubenswrapper[4813]: I0217 08:46:01.727660 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" podStartSLOduration=2.727655177 podStartE2EDuration="2.727655177s" podCreationTimestamp="2026-02-17 08:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:46:01.72737921 +0000 UTC m=+309.388140433" watchObservedRunningTime="2026-02-17 08:46:01.727655177 +0000 UTC m=+309.388416390" Feb 17 08:46:02 crc kubenswrapper[4813]: I0217 08:46:02.239853 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:15 crc kubenswrapper[4813]: I0217 08:46:15.558220 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d6d847d76-r7qjv"] Feb 17 08:46:15 crc kubenswrapper[4813]: I0217 08:46:15.559720 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" podUID="062bbcea-d872-49af-8a7c-aab743110b97" containerName="controller-manager" containerID="cri-o://76cc743e5f438a6f91e2591e8fdccded0848a5c1f31bdde60f7029ebc0d39008" gracePeriod=30 Feb 17 08:46:15 crc kubenswrapper[4813]: I0217 08:46:15.584227 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx"] Feb 17 08:46:15 crc kubenswrapper[4813]: I0217 08:46:15.586640 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" 
podUID="1ffe5179-7521-48ff-95ab-a06b983b2b25" containerName="route-controller-manager" containerID="cri-o://573adc47f31e88c3c8d16ccbc56a645ead938c396c0d67aca1b47087158920f0" gracePeriod=30 Feb 17 08:46:15 crc kubenswrapper[4813]: I0217 08:46:15.783946 4813 generic.go:334] "Generic (PLEG): container finished" podID="062bbcea-d872-49af-8a7c-aab743110b97" containerID="76cc743e5f438a6f91e2591e8fdccded0848a5c1f31bdde60f7029ebc0d39008" exitCode=0 Feb 17 08:46:15 crc kubenswrapper[4813]: I0217 08:46:15.784057 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" event={"ID":"062bbcea-d872-49af-8a7c-aab743110b97","Type":"ContainerDied","Data":"76cc743e5f438a6f91e2591e8fdccded0848a5c1f31bdde60f7029ebc0d39008"} Feb 17 08:46:15 crc kubenswrapper[4813]: I0217 08:46:15.786361 4813 generic.go:334] "Generic (PLEG): container finished" podID="1ffe5179-7521-48ff-95ab-a06b983b2b25" containerID="573adc47f31e88c3c8d16ccbc56a645ead938c396c0d67aca1b47087158920f0" exitCode=0 Feb 17 08:46:15 crc kubenswrapper[4813]: I0217 08:46:15.786417 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" event={"ID":"1ffe5179-7521-48ff-95ab-a06b983b2b25","Type":"ContainerDied","Data":"573adc47f31e88c3c8d16ccbc56a645ead938c396c0d67aca1b47087158920f0"} Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.036685 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.043227 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.070268 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ffe5179-7521-48ff-95ab-a06b983b2b25-client-ca\") pod \"1ffe5179-7521-48ff-95ab-a06b983b2b25\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.070374 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062bbcea-d872-49af-8a7c-aab743110b97-serving-cert\") pod \"062bbcea-d872-49af-8a7c-aab743110b97\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.070471 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-proxy-ca-bundles\") pod \"062bbcea-d872-49af-8a7c-aab743110b97\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.070509 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n5ff\" (UniqueName: \"kubernetes.io/projected/1ffe5179-7521-48ff-95ab-a06b983b2b25-kube-api-access-9n5ff\") pod \"1ffe5179-7521-48ff-95ab-a06b983b2b25\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.070557 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-client-ca\") pod \"062bbcea-d872-49af-8a7c-aab743110b97\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.070618 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffe5179-7521-48ff-95ab-a06b983b2b25-serving-cert\") pod \"1ffe5179-7521-48ff-95ab-a06b983b2b25\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.070648 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sphlt\" (UniqueName: \"kubernetes.io/projected/062bbcea-d872-49af-8a7c-aab743110b97-kube-api-access-sphlt\") pod \"062bbcea-d872-49af-8a7c-aab743110b97\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.070733 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-config\") pod \"062bbcea-d872-49af-8a7c-aab743110b97\" (UID: \"062bbcea-d872-49af-8a7c-aab743110b97\") " Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.070784 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffe5179-7521-48ff-95ab-a06b983b2b25-config\") pod \"1ffe5179-7521-48ff-95ab-a06b983b2b25\" (UID: \"1ffe5179-7521-48ff-95ab-a06b983b2b25\") " Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.071182 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ffe5179-7521-48ff-95ab-a06b983b2b25-client-ca" (OuterVolumeSpecName: "client-ca") pod "1ffe5179-7521-48ff-95ab-a06b983b2b25" (UID: "1ffe5179-7521-48ff-95ab-a06b983b2b25"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.072023 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "062bbcea-d872-49af-8a7c-aab743110b97" (UID: "062bbcea-d872-49af-8a7c-aab743110b97"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.072262 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-client-ca" (OuterVolumeSpecName: "client-ca") pod "062bbcea-d872-49af-8a7c-aab743110b97" (UID: "062bbcea-d872-49af-8a7c-aab743110b97"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.072839 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ffe5179-7521-48ff-95ab-a06b983b2b25-config" (OuterVolumeSpecName: "config") pod "1ffe5179-7521-48ff-95ab-a06b983b2b25" (UID: "1ffe5179-7521-48ff-95ab-a06b983b2b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.072838 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-config" (OuterVolumeSpecName: "config") pod "062bbcea-d872-49af-8a7c-aab743110b97" (UID: "062bbcea-d872-49af-8a7c-aab743110b97"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.084240 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ffe5179-7521-48ff-95ab-a06b983b2b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1ffe5179-7521-48ff-95ab-a06b983b2b25" (UID: "1ffe5179-7521-48ff-95ab-a06b983b2b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.084257 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062bbcea-d872-49af-8a7c-aab743110b97-kube-api-access-sphlt" (OuterVolumeSpecName: "kube-api-access-sphlt") pod "062bbcea-d872-49af-8a7c-aab743110b97" (UID: "062bbcea-d872-49af-8a7c-aab743110b97"). InnerVolumeSpecName "kube-api-access-sphlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.084371 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/062bbcea-d872-49af-8a7c-aab743110b97-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "062bbcea-d872-49af-8a7c-aab743110b97" (UID: "062bbcea-d872-49af-8a7c-aab743110b97"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.084721 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ffe5179-7521-48ff-95ab-a06b983b2b25-kube-api-access-9n5ff" (OuterVolumeSpecName: "kube-api-access-9n5ff") pod "1ffe5179-7521-48ff-95ab-a06b983b2b25" (UID: "1ffe5179-7521-48ff-95ab-a06b983b2b25"). InnerVolumeSpecName "kube-api-access-9n5ff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.172582 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ffe5179-7521-48ff-95ab-a06b983b2b25-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.172616 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ffe5179-7521-48ff-95ab-a06b983b2b25-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.172629 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062bbcea-d872-49af-8a7c-aab743110b97-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.172641 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.172651 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n5ff\" (UniqueName: \"kubernetes.io/projected/1ffe5179-7521-48ff-95ab-a06b983b2b25-kube-api-access-9n5ff\") on node \"crc\" DevicePath \"\"" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.172660 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.172668 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sphlt\" (UniqueName: \"kubernetes.io/projected/062bbcea-d872-49af-8a7c-aab743110b97-kube-api-access-sphlt\") on node \"crc\" DevicePath \"\"" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.172677 4813 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ffe5179-7521-48ff-95ab-a06b983b2b25-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.172685 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062bbcea-d872-49af-8a7c-aab743110b97-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.364921 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lfq75"] Feb 17 08:46:16 crc kubenswrapper[4813]: E0217 08:46:16.365247 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062bbcea-d872-49af-8a7c-aab743110b97" containerName="controller-manager" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.365273 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="062bbcea-d872-49af-8a7c-aab743110b97" containerName="controller-manager" Feb 17 08:46:16 crc kubenswrapper[4813]: E0217 08:46:16.365302 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ffe5179-7521-48ff-95ab-a06b983b2b25" containerName="route-controller-manager" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.365360 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ffe5179-7521-48ff-95ab-a06b983b2b25" containerName="route-controller-manager" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.365668 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ffe5179-7521-48ff-95ab-a06b983b2b25" containerName="route-controller-manager" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.365707 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="062bbcea-d872-49af-8a7c-aab743110b97" containerName="controller-manager" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.367028 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.370158 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lfq75"] Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.370509 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.477305 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4df304-511d-49cc-a151-68139db654e0-catalog-content\") pod \"certified-operators-lfq75\" (UID: \"ed4df304-511d-49cc-a151-68139db654e0\") " pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.477445 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed4df304-511d-49cc-a151-68139db654e0-utilities\") pod \"certified-operators-lfq75\" (UID: \"ed4df304-511d-49cc-a151-68139db654e0\") " pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.477524 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndpmt\" (UniqueName: \"kubernetes.io/projected/ed4df304-511d-49cc-a151-68139db654e0-kube-api-access-ndpmt\") pod \"certified-operators-lfq75\" (UID: \"ed4df304-511d-49cc-a151-68139db654e0\") " pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.560544 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjh7"] Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.561932 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.568946 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.572711 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjh7"] Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.578724 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed4df304-511d-49cc-a151-68139db654e0-utilities\") pod \"certified-operators-lfq75\" (UID: \"ed4df304-511d-49cc-a151-68139db654e0\") " pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.578856 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndpmt\" (UniqueName: \"kubernetes.io/projected/ed4df304-511d-49cc-a151-68139db654e0-kube-api-access-ndpmt\") pod \"certified-operators-lfq75\" (UID: \"ed4df304-511d-49cc-a151-68139db654e0\") " pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.578958 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4df304-511d-49cc-a151-68139db654e0-catalog-content\") pod \"certified-operators-lfq75\" (UID: \"ed4df304-511d-49cc-a151-68139db654e0\") " pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.579431 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed4df304-511d-49cc-a151-68139db654e0-utilities\") pod \"certified-operators-lfq75\" (UID: \"ed4df304-511d-49cc-a151-68139db654e0\") " 
pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.579476 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4df304-511d-49cc-a151-68139db654e0-catalog-content\") pod \"certified-operators-lfq75\" (UID: \"ed4df304-511d-49cc-a151-68139db654e0\") " pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.611489 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndpmt\" (UniqueName: \"kubernetes.io/projected/ed4df304-511d-49cc-a151-68139db654e0-kube-api-access-ndpmt\") pod \"certified-operators-lfq75\" (UID: \"ed4df304-511d-49cc-a151-68139db654e0\") " pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.680621 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787772d1-3a83-410d-825e-c63219fb80ec-catalog-content\") pod \"redhat-marketplace-lzjh7\" (UID: \"787772d1-3a83-410d-825e-c63219fb80ec\") " pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.681030 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twnz7\" (UniqueName: \"kubernetes.io/projected/787772d1-3a83-410d-825e-c63219fb80ec-kube-api-access-twnz7\") pod \"redhat-marketplace-lzjh7\" (UID: \"787772d1-3a83-410d-825e-c63219fb80ec\") " pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.681079 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787772d1-3a83-410d-825e-c63219fb80ec-utilities\") pod \"redhat-marketplace-lzjh7\" (UID: 
\"787772d1-3a83-410d-825e-c63219fb80ec\") " pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.726059 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.781790 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twnz7\" (UniqueName: \"kubernetes.io/projected/787772d1-3a83-410d-825e-c63219fb80ec-kube-api-access-twnz7\") pod \"redhat-marketplace-lzjh7\" (UID: \"787772d1-3a83-410d-825e-c63219fb80ec\") " pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.781862 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787772d1-3a83-410d-825e-c63219fb80ec-utilities\") pod \"redhat-marketplace-lzjh7\" (UID: \"787772d1-3a83-410d-825e-c63219fb80ec\") " pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.781967 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787772d1-3a83-410d-825e-c63219fb80ec-catalog-content\") pod \"redhat-marketplace-lzjh7\" (UID: \"787772d1-3a83-410d-825e-c63219fb80ec\") " pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.782778 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787772d1-3a83-410d-825e-c63219fb80ec-catalog-content\") pod \"redhat-marketplace-lzjh7\" (UID: \"787772d1-3a83-410d-825e-c63219fb80ec\") " pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.783063 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787772d1-3a83-410d-825e-c63219fb80ec-utilities\") pod \"redhat-marketplace-lzjh7\" (UID: \"787772d1-3a83-410d-825e-c63219fb80ec\") " pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.805240 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twnz7\" (UniqueName: \"kubernetes.io/projected/787772d1-3a83-410d-825e-c63219fb80ec-kube-api-access-twnz7\") pod \"redhat-marketplace-lzjh7\" (UID: \"787772d1-3a83-410d-825e-c63219fb80ec\") " pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.807748 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" event={"ID":"1ffe5179-7521-48ff-95ab-a06b983b2b25","Type":"ContainerDied","Data":"d4fc5d4715291e0692a0bfe68be3b245411379f40f82c6b567f86d5cfa04c880"} Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.807802 4813 scope.go:117] "RemoveContainer" containerID="573adc47f31e88c3c8d16ccbc56a645ead938c396c0d67aca1b47087158920f0" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.807912 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.811189 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" event={"ID":"062bbcea-d872-49af-8a7c-aab743110b97","Type":"ContainerDied","Data":"4c46f9bac87f90e4f7f4dfe477853157d8b9dae00968ab98b7e5abf3dfa87453"} Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.811610 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d6d847d76-r7qjv" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.850812 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx"] Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.856843 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcf8b4c5d-przsx"] Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.861706 4813 scope.go:117] "RemoveContainer" containerID="76cc743e5f438a6f91e2591e8fdccded0848a5c1f31bdde60f7029ebc0d39008" Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.877576 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d6d847d76-r7qjv"] Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.882487 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d6d847d76-r7qjv"] Feb 17 08:46:16 crc kubenswrapper[4813]: I0217 08:46:16.905365 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.023644 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lfq75"] Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.122467 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062bbcea-d872-49af-8a7c-aab743110b97" path="/var/lib/kubelet/pods/062bbcea-d872-49af-8a7c-aab743110b97/volumes" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.123164 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ffe5179-7521-48ff-95ab-a06b983b2b25" path="/var/lib/kubelet/pods/1ffe5179-7521-48ff-95ab-a06b983b2b25/volumes" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.123919 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjh7"] Feb 17 08:46:17 crc kubenswrapper[4813]: W0217 08:46:17.128018 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod787772d1_3a83_410d_825e_c63219fb80ec.slice/crio-0cf8e0cf182655c392ba13c637c5211bf91778c965629f283ee0cca166844ab9 WatchSource:0}: Error finding container 0cf8e0cf182655c392ba13c637c5211bf91778c965629f283ee0cca166844ab9: Status 404 returned error can't find the container with id 0cf8e0cf182655c392ba13c637c5211bf91778c965629f283ee0cca166844ab9 Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.590207 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77"] Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.591736 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.595224 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.595256 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.596628 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.597158 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.597472 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.598960 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.602668 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-754b797845-ncggt"] Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.603947 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.610843 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77"] Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.611671 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.612489 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.613084 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.614129 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.614466 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.614933 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.623893 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-754b797845-ncggt"] Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.626386 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.698197 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x9vc\" (UniqueName: 
\"kubernetes.io/projected/64180133-9110-4a06-ba50-e459c6b8f7b0-kube-api-access-9x9vc\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.698286 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-config\") pod \"route-controller-manager-7c4d96c475-2pr77\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.698334 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-client-ca\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.698374 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-serving-cert\") pod \"route-controller-manager-7c4d96c475-2pr77\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.698399 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-proxy-ca-bundles\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " 
pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.698467 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-client-ca\") pod \"route-controller-manager-7c4d96c475-2pr77\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.698494 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-config\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.698524 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64180133-9110-4a06-ba50-e459c6b8f7b0-serving-cert\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.698556 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jchg5\" (UniqueName: \"kubernetes.io/projected/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-kube-api-access-jchg5\") pod \"route-controller-manager-7c4d96c475-2pr77\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.800090 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-client-ca\") pod \"route-controller-manager-7c4d96c475-2pr77\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.800152 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-config\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.800184 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64180133-9110-4a06-ba50-e459c6b8f7b0-serving-cert\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.800217 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jchg5\" (UniqueName: \"kubernetes.io/projected/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-kube-api-access-jchg5\") pod \"route-controller-manager-7c4d96c475-2pr77\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.801489 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x9vc\" (UniqueName: \"kubernetes.io/projected/64180133-9110-4a06-ba50-e459c6b8f7b0-kube-api-access-9x9vc\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" 
Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.801547 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-config\") pod \"route-controller-manager-7c4d96c475-2pr77\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.801573 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-client-ca\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.801604 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-serving-cert\") pod \"route-controller-manager-7c4d96c475-2pr77\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.801680 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-proxy-ca-bundles\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.802793 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-client-ca\") pod \"route-controller-manager-7c4d96c475-2pr77\" (UID: 
\"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.803092 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-config\") pod \"route-controller-manager-7c4d96c475-2pr77\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.803465 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-proxy-ca-bundles\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.803652 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-client-ca\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.808780 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-serving-cert\") pod \"route-controller-manager-7c4d96c475-2pr77\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.810342 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/64180133-9110-4a06-ba50-e459c6b8f7b0-serving-cert\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.810550 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-config\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.822078 4813 generic.go:334] "Generic (PLEG): container finished" podID="ed4df304-511d-49cc-a151-68139db654e0" containerID="491b06fc219fa2b437b3a1b78f92c67c785cd8463f1d7edc937d59fc0378db18" exitCode=0 Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.822150 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfq75" event={"ID":"ed4df304-511d-49cc-a151-68139db654e0","Type":"ContainerDied","Data":"491b06fc219fa2b437b3a1b78f92c67c785cd8463f1d7edc937d59fc0378db18"} Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.822177 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfq75" event={"ID":"ed4df304-511d-49cc-a151-68139db654e0","Type":"ContainerStarted","Data":"617235ccaea46b757c743762fff69a264e638f016a9d60ec75c400f4803f5f73"} Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.826701 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jchg5\" (UniqueName: \"kubernetes.io/projected/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-kube-api-access-jchg5\") pod \"route-controller-manager-7c4d96c475-2pr77\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 
08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.827519 4813 generic.go:334] "Generic (PLEG): container finished" podID="787772d1-3a83-410d-825e-c63219fb80ec" containerID="9003252c78ddbc4d44cfd4ea56367b17a26320cb6541e8d848734d4b2a975cc1" exitCode=0 Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.827650 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjh7" event={"ID":"787772d1-3a83-410d-825e-c63219fb80ec","Type":"ContainerDied","Data":"9003252c78ddbc4d44cfd4ea56367b17a26320cb6541e8d848734d4b2a975cc1"} Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.827697 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjh7" event={"ID":"787772d1-3a83-410d-825e-c63219fb80ec","Type":"ContainerStarted","Data":"0cf8e0cf182655c392ba13c637c5211bf91778c965629f283ee0cca166844ab9"} Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.843371 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x9vc\" (UniqueName: \"kubernetes.io/projected/64180133-9110-4a06-ba50-e459c6b8f7b0-kube-api-access-9x9vc\") pod \"controller-manager-754b797845-ncggt\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.951121 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:17 crc kubenswrapper[4813]: I0217 08:46:17.964108 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.417624 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-754b797845-ncggt"] Feb 17 08:46:18 crc kubenswrapper[4813]: W0217 08:46:18.421456 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64180133_9110_4a06_ba50_e459c6b8f7b0.slice/crio-d12267b5e8e4523c7c7fc4dbef832dba111ec479847f161c9bde4987b102f802 WatchSource:0}: Error finding container d12267b5e8e4523c7c7fc4dbef832dba111ec479847f161c9bde4987b102f802: Status 404 returned error can't find the container with id d12267b5e8e4523c7c7fc4dbef832dba111ec479847f161c9bde4987b102f802 Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.474220 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77"] Feb 17 08:46:18 crc kubenswrapper[4813]: W0217 08:46:18.487005 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cfb7c66_d356_470b_a64a_b2f8d923d5b2.slice/crio-e72d5e595564424cf5b1d2c51ab2052d82c522fe90564411a3e58c1c62c7fc41 WatchSource:0}: Error finding container e72d5e595564424cf5b1d2c51ab2052d82c522fe90564411a3e58c1c62c7fc41: Status 404 returned error can't find the container with id e72d5e595564424cf5b1d2c51ab2052d82c522fe90564411a3e58c1c62c7fc41 Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.760905 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q2wwb"] Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.762101 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.765379 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.778124 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2wwb"] Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.842691 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zw2d\" (UniqueName: \"kubernetes.io/projected/7a3ec35f-1008-41ec-842d-7d381d01ef12-kube-api-access-7zw2d\") pod \"redhat-operators-q2wwb\" (UID: \"7a3ec35f-1008-41ec-842d-7d381d01ef12\") " pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.842731 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a3ec35f-1008-41ec-842d-7d381d01ef12-catalog-content\") pod \"redhat-operators-q2wwb\" (UID: \"7a3ec35f-1008-41ec-842d-7d381d01ef12\") " pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.842756 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a3ec35f-1008-41ec-842d-7d381d01ef12-utilities\") pod \"redhat-operators-q2wwb\" (UID: \"7a3ec35f-1008-41ec-842d-7d381d01ef12\") " pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.859262 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfq75" event={"ID":"ed4df304-511d-49cc-a151-68139db654e0","Type":"ContainerStarted","Data":"b3877ea0f448911dac56aae123d5c62f7ef513d4965cd8a3ee1cce5d32326afa"} Feb 17 
08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.861267 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-754b797845-ncggt" event={"ID":"64180133-9110-4a06-ba50-e459c6b8f7b0","Type":"ContainerStarted","Data":"6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126"} Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.861286 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-754b797845-ncggt" event={"ID":"64180133-9110-4a06-ba50-e459c6b8f7b0","Type":"ContainerStarted","Data":"d12267b5e8e4523c7c7fc4dbef832dba111ec479847f161c9bde4987b102f802"} Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.861834 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.864438 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" event={"ID":"1cfb7c66-d356-470b-a64a-b2f8d923d5b2","Type":"ContainerStarted","Data":"48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85"} Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.864456 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" event={"ID":"1cfb7c66-d356-470b-a64a-b2f8d923d5b2","Type":"ContainerStarted","Data":"e72d5e595564424cf5b1d2c51ab2052d82c522fe90564411a3e58c1c62c7fc41"} Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.864882 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.869596 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.876181 4813 generic.go:334] "Generic (PLEG): container finished" podID="787772d1-3a83-410d-825e-c63219fb80ec" containerID="c1a7a77f4111b00d8c1ddd773d33771861e9b45f336f5a7a2d2fe254125d7654" exitCode=0 Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.876215 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjh7" event={"ID":"787772d1-3a83-410d-825e-c63219fb80ec","Type":"ContainerDied","Data":"c1a7a77f4111b00d8c1ddd773d33771861e9b45f336f5a7a2d2fe254125d7654"} Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.928934 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-754b797845-ncggt" podStartSLOduration=3.928917435 podStartE2EDuration="3.928917435s" podCreationTimestamp="2026-02-17 08:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:46:18.896811309 +0000 UTC m=+326.557572532" watchObservedRunningTime="2026-02-17 08:46:18.928917435 +0000 UTC m=+326.589678658" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.946175 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zw2d\" (UniqueName: \"kubernetes.io/projected/7a3ec35f-1008-41ec-842d-7d381d01ef12-kube-api-access-7zw2d\") pod \"redhat-operators-q2wwb\" (UID: \"7a3ec35f-1008-41ec-842d-7d381d01ef12\") " pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.946222 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a3ec35f-1008-41ec-842d-7d381d01ef12-catalog-content\") pod \"redhat-operators-q2wwb\" (UID: \"7a3ec35f-1008-41ec-842d-7d381d01ef12\") " 
pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.946259 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a3ec35f-1008-41ec-842d-7d381d01ef12-utilities\") pod \"redhat-operators-q2wwb\" (UID: \"7a3ec35f-1008-41ec-842d-7d381d01ef12\") " pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.946782 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a3ec35f-1008-41ec-842d-7d381d01ef12-utilities\") pod \"redhat-operators-q2wwb\" (UID: \"7a3ec35f-1008-41ec-842d-7d381d01ef12\") " pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.946846 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a3ec35f-1008-41ec-842d-7d381d01ef12-catalog-content\") pod \"redhat-operators-q2wwb\" (UID: \"7a3ec35f-1008-41ec-842d-7d381d01ef12\") " pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.945377 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" podStartSLOduration=3.945361763 podStartE2EDuration="3.945361763s" podCreationTimestamp="2026-02-17 08:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:46:18.94416896 +0000 UTC m=+326.604930193" watchObservedRunningTime="2026-02-17 08:46:18.945361763 +0000 UTC m=+326.606122996" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.961669 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hklb7"] Feb 17 08:46:18 crc kubenswrapper[4813]: 
I0217 08:46:18.963884 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.972299 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.981176 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hklb7"] Feb 17 08:46:18 crc kubenswrapper[4813]: I0217 08:46:18.984743 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zw2d\" (UniqueName: \"kubernetes.io/projected/7a3ec35f-1008-41ec-842d-7d381d01ef12-kube-api-access-7zw2d\") pod \"redhat-operators-q2wwb\" (UID: \"7a3ec35f-1008-41ec-842d-7d381d01ef12\") " pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.047748 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81422ea-63bb-4de1-b63f-7f3c9b26b5c7-utilities\") pod \"community-operators-hklb7\" (UID: \"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7\") " pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.047812 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqpzk\" (UniqueName: \"kubernetes.io/projected/e81422ea-63bb-4de1-b63f-7f3c9b26b5c7-kube-api-access-hqpzk\") pod \"community-operators-hklb7\" (UID: \"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7\") " pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.047838 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e81422ea-63bb-4de1-b63f-7f3c9b26b5c7-catalog-content\") pod \"community-operators-hklb7\" (UID: \"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7\") " pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.149365 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81422ea-63bb-4de1-b63f-7f3c9b26b5c7-utilities\") pod \"community-operators-hklb7\" (UID: \"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7\") " pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.149467 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqpzk\" (UniqueName: \"kubernetes.io/projected/e81422ea-63bb-4de1-b63f-7f3c9b26b5c7-kube-api-access-hqpzk\") pod \"community-operators-hklb7\" (UID: \"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7\") " pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.149505 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81422ea-63bb-4de1-b63f-7f3c9b26b5c7-catalog-content\") pod \"community-operators-hklb7\" (UID: \"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7\") " pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.150026 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81422ea-63bb-4de1-b63f-7f3c9b26b5c7-catalog-content\") pod \"community-operators-hklb7\" (UID: \"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7\") " pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.150344 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e81422ea-63bb-4de1-b63f-7f3c9b26b5c7-utilities\") pod \"community-operators-hklb7\" (UID: \"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7\") " pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.172358 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqpzk\" (UniqueName: \"kubernetes.io/projected/e81422ea-63bb-4de1-b63f-7f3c9b26b5c7-kube-api-access-hqpzk\") pod \"community-operators-hklb7\" (UID: \"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7\") " pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.212467 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.234240 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.284846 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.528610 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hklb7"] Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.701332 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2wwb"] Feb 17 08:46:19 crc kubenswrapper[4813]: W0217 08:46:19.744938 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a3ec35f_1008_41ec_842d_7d381d01ef12.slice/crio-8866adf5dc55efc9d5a9caa85cbc5bbff1f00b1949c98fd44b5b5d8fa77f21d1 WatchSource:0}: Error finding container 8866adf5dc55efc9d5a9caa85cbc5bbff1f00b1949c98fd44b5b5d8fa77f21d1: Status 404 returned error can't find the container with id 8866adf5dc55efc9d5a9caa85cbc5bbff1f00b1949c98fd44b5b5d8fa77f21d1 Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.882822 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjh7" event={"ID":"787772d1-3a83-410d-825e-c63219fb80ec","Type":"ContainerStarted","Data":"cb0ae0b608f8fbea0a2d1e0192ecb98558b0c7370d524295ce5fed7952089846"} Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.885860 4813 generic.go:334] "Generic (PLEG): container finished" podID="ed4df304-511d-49cc-a151-68139db654e0" containerID="b3877ea0f448911dac56aae123d5c62f7ef513d4965cd8a3ee1cce5d32326afa" exitCode=0 Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.885935 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfq75" event={"ID":"ed4df304-511d-49cc-a151-68139db654e0","Type":"ContainerDied","Data":"b3877ea0f448911dac56aae123d5c62f7ef513d4965cd8a3ee1cce5d32326afa"} Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.885965 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-lfq75" event={"ID":"ed4df304-511d-49cc-a151-68139db654e0","Type":"ContainerStarted","Data":"250f7f60308e5ee29719404b67628d69c986f0b86b406036fcc96fece898a7ba"} Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.888584 4813 generic.go:334] "Generic (PLEG): container finished" podID="e81422ea-63bb-4de1-b63f-7f3c9b26b5c7" containerID="664c498156fa3143669365d9f3c7e3eb72cb40b9305cd61cd4baa593d5bc7ff0" exitCode=0 Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.888635 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hklb7" event={"ID":"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7","Type":"ContainerDied","Data":"664c498156fa3143669365d9f3c7e3eb72cb40b9305cd61cd4baa593d5bc7ff0"} Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.888653 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hklb7" event={"ID":"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7","Type":"ContainerStarted","Data":"da651e7adab0fbe54081f3e66893911c6433f9b6e21c16f392b6ea2e9eb4f52b"} Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.891009 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2wwb" event={"ID":"7a3ec35f-1008-41ec-842d-7d381d01ef12","Type":"ContainerStarted","Data":"8d9c8a88b947de0ec5b37175537860c682838fa550206ff653314904988391d3"} Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.891041 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2wwb" event={"ID":"7a3ec35f-1008-41ec-842d-7d381d01ef12","Type":"ContainerStarted","Data":"8866adf5dc55efc9d5a9caa85cbc5bbff1f00b1949c98fd44b5b5d8fa77f21d1"} Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.902557 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lzjh7" podStartSLOduration=2.434203337 
podStartE2EDuration="3.902539525s" podCreationTimestamp="2026-02-17 08:46:16 +0000 UTC" firstStartedPulling="2026-02-17 08:46:17.835637781 +0000 UTC m=+325.496399014" lastFinishedPulling="2026-02-17 08:46:19.303973979 +0000 UTC m=+326.964735202" observedRunningTime="2026-02-17 08:46:19.90201406 +0000 UTC m=+327.562775283" watchObservedRunningTime="2026-02-17 08:46:19.902539525 +0000 UTC m=+327.563300768" Feb 17 08:46:19 crc kubenswrapper[4813]: I0217 08:46:19.951342 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lfq75" podStartSLOduration=2.1602914 podStartE2EDuration="3.951300304s" podCreationTimestamp="2026-02-17 08:46:16 +0000 UTC" firstStartedPulling="2026-02-17 08:46:17.824190119 +0000 UTC m=+325.484951352" lastFinishedPulling="2026-02-17 08:46:19.615199033 +0000 UTC m=+327.275960256" observedRunningTime="2026-02-17 08:46:19.950358578 +0000 UTC m=+327.611119801" watchObservedRunningTime="2026-02-17 08:46:19.951300304 +0000 UTC m=+327.612061537" Feb 17 08:46:20 crc kubenswrapper[4813]: I0217 08:46:20.899618 4813 generic.go:334] "Generic (PLEG): container finished" podID="e81422ea-63bb-4de1-b63f-7f3c9b26b5c7" containerID="e13f11c875cecc995771dcbf73dffadb841a20bedabd32bba0df036eaa96673a" exitCode=0 Feb 17 08:46:20 crc kubenswrapper[4813]: I0217 08:46:20.899687 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hklb7" event={"ID":"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7","Type":"ContainerDied","Data":"e13f11c875cecc995771dcbf73dffadb841a20bedabd32bba0df036eaa96673a"} Feb 17 08:46:20 crc kubenswrapper[4813]: I0217 08:46:20.902857 4813 generic.go:334] "Generic (PLEG): container finished" podID="7a3ec35f-1008-41ec-842d-7d381d01ef12" containerID="8d9c8a88b947de0ec5b37175537860c682838fa550206ff653314904988391d3" exitCode=0 Feb 17 08:46:20 crc kubenswrapper[4813]: I0217 08:46:20.902949 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-q2wwb" event={"ID":"7a3ec35f-1008-41ec-842d-7d381d01ef12","Type":"ContainerDied","Data":"8d9c8a88b947de0ec5b37175537860c682838fa550206ff653314904988391d3"} Feb 17 08:46:21 crc kubenswrapper[4813]: I0217 08:46:21.911148 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2wwb" event={"ID":"7a3ec35f-1008-41ec-842d-7d381d01ef12","Type":"ContainerStarted","Data":"bb39ac282230929ce19e15da559506e56538cda25adc3279a0a094e59293cc20"} Feb 17 08:46:21 crc kubenswrapper[4813]: I0217 08:46:21.913513 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hklb7" event={"ID":"e81422ea-63bb-4de1-b63f-7f3c9b26b5c7","Type":"ContainerStarted","Data":"168e75c37b61b5aa957ae60dd2e09ca9f13c02874528b0212027af9874695494"} Feb 17 08:46:21 crc kubenswrapper[4813]: I0217 08:46:21.959579 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hklb7" podStartSLOduration=2.524214542 podStartE2EDuration="3.95956414s" podCreationTimestamp="2026-02-17 08:46:18 +0000 UTC" firstStartedPulling="2026-02-17 08:46:19.890090635 +0000 UTC m=+327.550851868" lastFinishedPulling="2026-02-17 08:46:21.325440243 +0000 UTC m=+328.986201466" observedRunningTime="2026-02-17 08:46:21.958798509 +0000 UTC m=+329.619559732" watchObservedRunningTime="2026-02-17 08:46:21.95956414 +0000 UTC m=+329.620325363" Feb 17 08:46:22 crc kubenswrapper[4813]: I0217 08:46:22.922809 4813 generic.go:334] "Generic (PLEG): container finished" podID="7a3ec35f-1008-41ec-842d-7d381d01ef12" containerID="bb39ac282230929ce19e15da559506e56538cda25adc3279a0a094e59293cc20" exitCode=0 Feb 17 08:46:22 crc kubenswrapper[4813]: I0217 08:46:22.922861 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2wwb" 
event={"ID":"7a3ec35f-1008-41ec-842d-7d381d01ef12","Type":"ContainerDied","Data":"bb39ac282230929ce19e15da559506e56538cda25adc3279a0a094e59293cc20"} Feb 17 08:46:23 crc kubenswrapper[4813]: I0217 08:46:23.930619 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2wwb" event={"ID":"7a3ec35f-1008-41ec-842d-7d381d01ef12","Type":"ContainerStarted","Data":"1e33d75aa75b3ab29492f929a90c67bcdee51bbf7e0ed7d160c69c5dc27f6910"} Feb 17 08:46:23 crc kubenswrapper[4813]: I0217 08:46:23.957105 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q2wwb" podStartSLOduration=3.556578575 podStartE2EDuration="5.957084012s" podCreationTimestamp="2026-02-17 08:46:18 +0000 UTC" firstStartedPulling="2026-02-17 08:46:20.904354635 +0000 UTC m=+328.565115868" lastFinishedPulling="2026-02-17 08:46:23.304860082 +0000 UTC m=+330.965621305" observedRunningTime="2026-02-17 08:46:23.952576179 +0000 UTC m=+331.613337402" watchObservedRunningTime="2026-02-17 08:46:23.957084012 +0000 UTC m=+331.617845245" Feb 17 08:46:26 crc kubenswrapper[4813]: I0217 08:46:26.726971 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:26 crc kubenswrapper[4813]: I0217 08:46:26.727292 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:26 crc kubenswrapper[4813]: I0217 08:46:26.810765 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:26 crc kubenswrapper[4813]: I0217 08:46:26.906540 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:26 crc kubenswrapper[4813]: I0217 08:46:26.907165 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:26 crc kubenswrapper[4813]: I0217 08:46:26.973353 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:26 crc kubenswrapper[4813]: I0217 08:46:26.988160 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lfq75" Feb 17 08:46:27 crc kubenswrapper[4813]: I0217 08:46:27.019115 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lzjh7" Feb 17 08:46:29 crc kubenswrapper[4813]: I0217 08:46:29.234918 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:29 crc kubenswrapper[4813]: I0217 08:46:29.234986 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:29 crc kubenswrapper[4813]: I0217 08:46:29.285114 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:29 crc kubenswrapper[4813]: I0217 08:46:29.285172 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:29 crc kubenswrapper[4813]: I0217 08:46:29.332557 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:30 crc kubenswrapper[4813]: I0217 08:46:30.011150 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hklb7" Feb 17 08:46:30 crc kubenswrapper[4813]: I0217 08:46:30.298556 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q2wwb" podUID="7a3ec35f-1008-41ec-842d-7d381d01ef12" 
containerName="registry-server" probeResult="failure" output=< Feb 17 08:46:30 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Feb 17 08:46:30 crc kubenswrapper[4813]: > Feb 17 08:46:35 crc kubenswrapper[4813]: I0217 08:46:35.166163 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:46:35 crc kubenswrapper[4813]: I0217 08:46:35.166299 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:46:39 crc kubenswrapper[4813]: I0217 08:46:39.307802 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:39 crc kubenswrapper[4813]: I0217 08:46:39.382087 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q2wwb" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.543981 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g4vgg"] Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.546066 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.563661 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g4vgg"] Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.693754 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-trusted-ca\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.693850 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-registry-tls\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.693885 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gs68\" (UniqueName: \"kubernetes.io/projected/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-kube-api-access-8gs68\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.693913 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 
17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.693935 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.693968 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-registry-certificates\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.693990 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-bound-sa-token\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.694044 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.752228 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.795081 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-trusted-ca\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.795152 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-registry-tls\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.795174 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gs68\" (UniqueName: \"kubernetes.io/projected/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-kube-api-access-8gs68\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.795196 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.795216 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.795236 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-registry-certificates\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.795252 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-bound-sa-token\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.795860 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.796621 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-trusted-ca\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc 
kubenswrapper[4813]: I0217 08:46:50.797572 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-registry-certificates\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.801135 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.801246 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-registry-tls\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.826320 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gs68\" (UniqueName: \"kubernetes.io/projected/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-kube-api-access-8gs68\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.834756 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49293c26-fd06-4d3c-b4e4-9c8c9a0d5805-bound-sa-token\") pod \"image-registry-66df7c8f76-g4vgg\" (UID: \"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:50 crc kubenswrapper[4813]: I0217 08:46:50.878561 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:51 crc kubenswrapper[4813]: I0217 08:46:51.309599 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g4vgg"] Feb 17 08:46:52 crc kubenswrapper[4813]: I0217 08:46:52.128428 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" event={"ID":"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805","Type":"ContainerStarted","Data":"60cbb4a7bbaf531d84d658649ce29661ba0ea4a0b4ef0a59ce2fd734bf37c6bb"} Feb 17 08:46:52 crc kubenswrapper[4813]: I0217 08:46:52.128769 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" event={"ID":"49293c26-fd06-4d3c-b4e4-9c8c9a0d5805","Type":"ContainerStarted","Data":"6927c39017f7deb63c1c7beb8018db89060746a2905651bc647b9e83585cdcf9"} Feb 17 08:46:52 crc kubenswrapper[4813]: I0217 08:46:52.128790 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" Feb 17 08:46:52 crc kubenswrapper[4813]: I0217 08:46:52.154866 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg" podStartSLOduration=2.154847609 podStartE2EDuration="2.154847609s" podCreationTimestamp="2026-02-17 08:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:46:52.151349974 +0000 UTC m=+359.812111197" watchObservedRunningTime="2026-02-17 08:46:52.154847609 +0000 UTC m=+359.815608842" Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.404438 4813 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-754b797845-ncggt"] Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.406168 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-754b797845-ncggt" podUID="64180133-9110-4a06-ba50-e459c6b8f7b0" containerName="controller-manager" containerID="cri-o://6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126" gracePeriod=30 Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.424997 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77"] Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.425250 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" podUID="1cfb7c66-d356-470b-a64a-b2f8d923d5b2" containerName="route-controller-manager" containerID="cri-o://48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85" gracePeriod=30 Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.874836 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754b797845-ncggt" Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.879692 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.945451 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-config\") pod \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.945492 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-client-ca\") pod \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.945511 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jchg5\" (UniqueName: \"kubernetes.io/projected/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-kube-api-access-jchg5\") pod \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.945536 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64180133-9110-4a06-ba50-e459c6b8f7b0-serving-cert\") pod \"64180133-9110-4a06-ba50-e459c6b8f7b0\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.945556 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x9vc\" (UniqueName: \"kubernetes.io/projected/64180133-9110-4a06-ba50-e459c6b8f7b0-kube-api-access-9x9vc\") pod \"64180133-9110-4a06-ba50-e459c6b8f7b0\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.945615 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-proxy-ca-bundles\") pod \"64180133-9110-4a06-ba50-e459c6b8f7b0\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.945645 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-client-ca\") pod \"64180133-9110-4a06-ba50-e459c6b8f7b0\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.945667 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-serving-cert\") pod \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\" (UID: \"1cfb7c66-d356-470b-a64a-b2f8d923d5b2\") " Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.945690 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-config\") pod \"64180133-9110-4a06-ba50-e459c6b8f7b0\" (UID: \"64180133-9110-4a06-ba50-e459c6b8f7b0\") " Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.946339 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-config" (OuterVolumeSpecName: "config") pod "1cfb7c66-d356-470b-a64a-b2f8d923d5b2" (UID: "1cfb7c66-d356-470b-a64a-b2f8d923d5b2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.946293 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-client-ca" (OuterVolumeSpecName: "client-ca") pod "1cfb7c66-d356-470b-a64a-b2f8d923d5b2" (UID: "1cfb7c66-d356-470b-a64a-b2f8d923d5b2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.946405 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-config" (OuterVolumeSpecName: "config") pod "64180133-9110-4a06-ba50-e459c6b8f7b0" (UID: "64180133-9110-4a06-ba50-e459c6b8f7b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.946925 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-client-ca" (OuterVolumeSpecName: "client-ca") pod "64180133-9110-4a06-ba50-e459c6b8f7b0" (UID: "64180133-9110-4a06-ba50-e459c6b8f7b0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.947521 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "64180133-9110-4a06-ba50-e459c6b8f7b0" (UID: "64180133-9110-4a06-ba50-e459c6b8f7b0"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.951451 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1cfb7c66-d356-470b-a64a-b2f8d923d5b2" (UID: "1cfb7c66-d356-470b-a64a-b2f8d923d5b2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.951476 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64180133-9110-4a06-ba50-e459c6b8f7b0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "64180133-9110-4a06-ba50-e459c6b8f7b0" (UID: "64180133-9110-4a06-ba50-e459c6b8f7b0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.951481 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64180133-9110-4a06-ba50-e459c6b8f7b0-kube-api-access-9x9vc" (OuterVolumeSpecName: "kube-api-access-9x9vc") pod "64180133-9110-4a06-ba50-e459c6b8f7b0" (UID: "64180133-9110-4a06-ba50-e459c6b8f7b0"). InnerVolumeSpecName "kube-api-access-9x9vc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:46:59 crc kubenswrapper[4813]: I0217 08:46:59.951565 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-kube-api-access-jchg5" (OuterVolumeSpecName: "kube-api-access-jchg5") pod "1cfb7c66-d356-470b-a64a-b2f8d923d5b2" (UID: "1cfb7c66-d356-470b-a64a-b2f8d923d5b2"). InnerVolumeSpecName "kube-api-access-jchg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.046871 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jchg5\" (UniqueName: \"kubernetes.io/projected/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-kube-api-access-jchg5\") on node \"crc\" DevicePath \"\""
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.046904 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64180133-9110-4a06-ba50-e459c6b8f7b0-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.046914 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x9vc\" (UniqueName: \"kubernetes.io/projected/64180133-9110-4a06-ba50-e459c6b8f7b0-kube-api-access-9x9vc\") on node \"crc\" DevicePath \"\""
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.046921 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.046929 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.046937 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.046945 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64180133-9110-4a06-ba50-e459c6b8f7b0-config\") on node \"crc\" DevicePath \"\""
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.046953 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-config\") on node \"crc\" DevicePath \"\""
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.046961 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1cfb7c66-d356-470b-a64a-b2f8d923d5b2-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.178078 4813 generic.go:334] "Generic (PLEG): container finished" podID="64180133-9110-4a06-ba50-e459c6b8f7b0" containerID="6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126" exitCode=0
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.178125 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-754b797845-ncggt" event={"ID":"64180133-9110-4a06-ba50-e459c6b8f7b0","Type":"ContainerDied","Data":"6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126"}
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.178137 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-754b797845-ncggt"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.178192 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-754b797845-ncggt" event={"ID":"64180133-9110-4a06-ba50-e459c6b8f7b0","Type":"ContainerDied","Data":"d12267b5e8e4523c7c7fc4dbef832dba111ec479847f161c9bde4987b102f802"}
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.178217 4813 scope.go:117] "RemoveContainer" containerID="6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.180654 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cfb7c66-d356-470b-a64a-b2f8d923d5b2" containerID="48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85" exitCode=0
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.180695 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" event={"ID":"1cfb7c66-d356-470b-a64a-b2f8d923d5b2","Type":"ContainerDied","Data":"48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85"}
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.180726 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77" event={"ID":"1cfb7c66-d356-470b-a64a-b2f8d923d5b2","Type":"ContainerDied","Data":"e72d5e595564424cf5b1d2c51ab2052d82c522fe90564411a3e58c1c62c7fc41"}
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.180752 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.201225 4813 scope.go:117] "RemoveContainer" containerID="6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126"
Feb 17 08:47:00 crc kubenswrapper[4813]: E0217 08:47:00.202805 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126\": container with ID starting with 6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126 not found: ID does not exist" containerID="6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.202838 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126"} err="failed to get container status \"6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126\": rpc error: code = NotFound desc = could not find container \"6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126\": container with ID starting with 6dc64b2018068968b22ff68797017dd7632663810dedd409c4589ff934949126 not found: ID does not exist"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.202860 4813 scope.go:117] "RemoveContainer" containerID="48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.209250 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-754b797845-ncggt"]
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.214134 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-754b797845-ncggt"]
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.218269 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77"]
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.221959 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4d96c475-2pr77"]
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.231460 4813 scope.go:117] "RemoveContainer" containerID="48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85"
Feb 17 08:47:00 crc kubenswrapper[4813]: E0217 08:47:00.239711 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85\": container with ID starting with 48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85 not found: ID does not exist" containerID="48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.241273 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85"} err="failed to get container status \"48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85\": rpc error: code = NotFound desc = could not find container \"48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85\": container with ID starting with 48f6577f7c545e85126285f7286193c86080f16692403745641e437a6f5b8b85 not found: ID does not exist"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.614036 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"]
Feb 17 08:47:00 crc kubenswrapper[4813]: E0217 08:47:00.614393 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfb7c66-d356-470b-a64a-b2f8d923d5b2" containerName="route-controller-manager"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.614406 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfb7c66-d356-470b-a64a-b2f8d923d5b2" containerName="route-controller-manager"
Feb 17 08:47:00 crc kubenswrapper[4813]: E0217 08:47:00.614416 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64180133-9110-4a06-ba50-e459c6b8f7b0" containerName="controller-manager"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.614424 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="64180133-9110-4a06-ba50-e459c6b8f7b0" containerName="controller-manager"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.614555 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cfb7c66-d356-470b-a64a-b2f8d923d5b2" containerName="route-controller-manager"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.614569 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="64180133-9110-4a06-ba50-e459c6b8f7b0" containerName="controller-manager"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.615106 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.618528 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"]
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.619396 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.619566 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.619998 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.620102 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.620105 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.620286 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.620432 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.621485 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.621791 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.624236 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.625729 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.625970 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.626171 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.628975 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"]
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.642547 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"]
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.660400 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.762175 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8-client-ca\") pod \"route-controller-manager-64d64f5ff7-6j46t\" (UID: \"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8\") " pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.762237 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-config\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.762290 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-client-ca\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.762345 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrsc\" (UniqueName: \"kubernetes.io/projected/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-kube-api-access-pnrsc\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.762474 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-proxy-ca-bundles\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.762543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8-config\") pod \"route-controller-manager-64d64f5ff7-6j46t\" (UID: \"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8\") " pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.762587 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8-serving-cert\") pod \"route-controller-manager-64d64f5ff7-6j46t\" (UID: \"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8\") " pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.762634 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr7sq\" (UniqueName: \"kubernetes.io/projected/81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8-kube-api-access-xr7sq\") pod \"route-controller-manager-64d64f5ff7-6j46t\" (UID: \"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8\") " pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.762690 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-serving-cert\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.864362 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8-config\") pod \"route-controller-manager-64d64f5ff7-6j46t\" (UID: \"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8\") " pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.864434 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8-serving-cert\") pod \"route-controller-manager-64d64f5ff7-6j46t\" (UID: \"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8\") " pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.864479 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr7sq\" (UniqueName: \"kubernetes.io/projected/81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8-kube-api-access-xr7sq\") pod \"route-controller-manager-64d64f5ff7-6j46t\" (UID: \"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8\") " pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.864516 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-serving-cert\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.864606 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8-client-ca\") pod \"route-controller-manager-64d64f5ff7-6j46t\" (UID: \"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8\") " pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.864663 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-config\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.864725 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-client-ca\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.864756 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrsc\" (UniqueName: \"kubernetes.io/projected/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-kube-api-access-pnrsc\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.864802 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-proxy-ca-bundles\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.866370 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8-config\") pod \"route-controller-manager-64d64f5ff7-6j46t\" (UID: \"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8\") " pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.866887 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-proxy-ca-bundles\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.866945 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8-client-ca\") pod \"route-controller-manager-64d64f5ff7-6j46t\" (UID: \"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8\") " pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.867191 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-client-ca\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.868589 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-config\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.871775 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8-serving-cert\") pod \"route-controller-manager-64d64f5ff7-6j46t\" (UID: \"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8\") " pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.872523 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-serving-cert\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.896056 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr7sq\" (UniqueName: \"kubernetes.io/projected/81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8-kube-api-access-xr7sq\") pod \"route-controller-manager-64d64f5ff7-6j46t\" (UID: \"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8\") " pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.898062 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrsc\" (UniqueName: \"kubernetes.io/projected/a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03-kube-api-access-pnrsc\") pod \"controller-manager-7c8dd5f884-nt76p\" (UID: \"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03\") " pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.950865 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:00 crc kubenswrapper[4813]: I0217 08:47:00.964947 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:01 crc kubenswrapper[4813]: I0217 08:47:01.127722 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cfb7c66-d356-470b-a64a-b2f8d923d5b2" path="/var/lib/kubelet/pods/1cfb7c66-d356-470b-a64a-b2f8d923d5b2/volumes"
Feb 17 08:47:01 crc kubenswrapper[4813]: I0217 08:47:01.128709 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64180133-9110-4a06-ba50-e459c6b8f7b0" path="/var/lib/kubelet/pods/64180133-9110-4a06-ba50-e459c6b8f7b0/volumes"
Feb 17 08:47:01 crc kubenswrapper[4813]: I0217 08:47:01.397172 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"]
Feb 17 08:47:01 crc kubenswrapper[4813]: I0217 08:47:01.463291 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"]
Feb 17 08:47:01 crc kubenswrapper[4813]: W0217 08:47:01.466623 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9b84c58_fe3d_4c03_b38e_d3f2dfd0ba03.slice/crio-9ad614e00211594e935f003ac6baccb01aa31b5756a22a92366d0db983d4d63f WatchSource:0}: Error finding container 9ad614e00211594e935f003ac6baccb01aa31b5756a22a92366d0db983d4d63f: Status 404 returned error can't find the container with id 9ad614e00211594e935f003ac6baccb01aa31b5756a22a92366d0db983d4d63f
Feb 17 08:47:02 crc kubenswrapper[4813]: I0217 08:47:02.195198 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p" event={"ID":"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03","Type":"ContainerStarted","Data":"7275f8e77cb3fa05792c3af1d1e795267c5de2fae8a9ee4d2338b3c5fc9ec050"}
Feb 17 08:47:02 crc kubenswrapper[4813]: I0217 08:47:02.195508 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p" event={"ID":"a9b84c58-fe3d-4c03-b38e-d3f2dfd0ba03","Type":"ContainerStarted","Data":"9ad614e00211594e935f003ac6baccb01aa31b5756a22a92366d0db983d4d63f"}
Feb 17 08:47:02 crc kubenswrapper[4813]: I0217 08:47:02.196061 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:02 crc kubenswrapper[4813]: I0217 08:47:02.197375 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t" event={"ID":"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8","Type":"ContainerStarted","Data":"b0da496b40a484fa03c059a09701e12b899c29ce41e8caff241b552d44f43e31"}
Feb 17 08:47:02 crc kubenswrapper[4813]: I0217 08:47:02.197399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t" event={"ID":"81ea85fb-b2c2-4b10-8bf6-54dbe741e0c8","Type":"ContainerStarted","Data":"a704248b1c99701a58b0e917cb7303fc25b9b66d05ea26c80adf4b815ea68a39"}
Feb 17 08:47:02 crc kubenswrapper[4813]: I0217 08:47:02.198049 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:02 crc kubenswrapper[4813]: I0217 08:47:02.201909 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p"
Feb 17 08:47:02 crc kubenswrapper[4813]: I0217 08:47:02.210407 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t"
Feb 17 08:47:02 crc kubenswrapper[4813]: I0217 08:47:02.221620 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c8dd5f884-nt76p" podStartSLOduration=3.221604738 podStartE2EDuration="3.221604738s" podCreationTimestamp="2026-02-17 08:46:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:47:02.218100823 +0000 UTC m=+369.878862046" watchObservedRunningTime="2026-02-17 08:47:02.221604738 +0000 UTC m=+369.882365961"
Feb 17 08:47:02 crc kubenswrapper[4813]: I0217 08:47:02.267270 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64d64f5ff7-6j46t" podStartSLOduration=3.267252763 podStartE2EDuration="3.267252763s" podCreationTimestamp="2026-02-17 08:46:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:47:02.266361088 +0000 UTC m=+369.927122321" watchObservedRunningTime="2026-02-17 08:47:02.267252763 +0000 UTC m=+369.928013986"
Feb 17 08:47:05 crc kubenswrapper[4813]: I0217 08:47:05.165550 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 08:47:05 crc kubenswrapper[4813]: I0217 08:47:05.166252 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 08:47:10 crc kubenswrapper[4813]: I0217 08:47:10.884656 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-g4vgg"
Feb 17 08:47:10 crc kubenswrapper[4813]: I0217 08:47:10.958883 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9pw9r"]
Feb 17 08:47:35 crc kubenswrapper[4813]: I0217 08:47:35.165475 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 08:47:35 crc kubenswrapper[4813]: I0217 08:47:35.166637 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 08:47:35 crc kubenswrapper[4813]: I0217 08:47:35.166712 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7"
Feb 17 08:47:35 crc kubenswrapper[4813]: I0217 08:47:35.167718 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"926e335d47cd84ab4eb72c1a31c7d4369f614aaae2415534ee97ac4b058875f2"} pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 08:47:35 crc kubenswrapper[4813]: I0217 08:47:35.167832 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" containerID="cri-o://926e335d47cd84ab4eb72c1a31c7d4369f614aaae2415534ee97ac4b058875f2" gracePeriod=600
Feb 17 08:47:35 crc kubenswrapper[4813]: I0217 08:47:35.418923 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a6ba827-b08b-4163-b067-d9adb119398d" containerID="926e335d47cd84ab4eb72c1a31c7d4369f614aaae2415534ee97ac4b058875f2" exitCode=0
Feb 17 08:47:35 crc kubenswrapper[4813]: I0217 08:47:35.419032 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerDied","Data":"926e335d47cd84ab4eb72c1a31c7d4369f614aaae2415534ee97ac4b058875f2"}
Feb 17 08:47:35 crc kubenswrapper[4813]: I0217 08:47:35.419398 4813 scope.go:117] "RemoveContainer" containerID="e8bbac5e7eee4ad535b6f2568bd0459d254170dcba13d0214a55985b939c00e1"
Feb 17 08:47:35 crc kubenswrapper[4813]: I0217 08:47:35.998921 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" podUID="41699b82-4fbd-4bc2-a45c-6971618962df" containerName="registry" containerID="cri-o://1e0d36ca4c88d246c6378b0896045bfc8f6c9af3eae4e93ca976fab6d3ed34af" gracePeriod=30
Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.429373 4813 generic.go:334] "Generic (PLEG): container finished" podID="41699b82-4fbd-4bc2-a45c-6971618962df" containerID="1e0d36ca4c88d246c6378b0896045bfc8f6c9af3eae4e93ca976fab6d3ed34af" exitCode=0
Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.429480 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" event={"ID":"41699b82-4fbd-4bc2-a45c-6971618962df","Type":"ContainerDied","Data":"1e0d36ca4c88d246c6378b0896045bfc8f6c9af3eae4e93ca976fab6d3ed34af"}
Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.433059 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerStarted","Data":"b435a016b264f7638ac4f0875359992eccdfebd43455e04664f08b4b9bf401cf"}
Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.538052 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r"
Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.691371 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd7nq\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-kube-api-access-xd7nq\") pod \"41699b82-4fbd-4bc2-a45c-6971618962df\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") "
Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.691472 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/41699b82-4fbd-4bc2-a45c-6971618962df-installation-pull-secrets\") pod \"41699b82-4fbd-4bc2-a45c-6971618962df\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") "
Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.691544 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-registry-tls\") pod \"41699b82-4fbd-4bc2-a45c-6971618962df\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") "
Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.691657 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-bound-sa-token\") pod \"41699b82-4fbd-4bc2-a45c-6971618962df\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") "
Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.691703 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41699b82-4fbd-4bc2-a45c-6971618962df-trusted-ca\") pod \"41699b82-4fbd-4bc2-a45c-6971618962df\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") "
Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.691793 4813 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/41699b82-4fbd-4bc2-a45c-6971618962df-registry-certificates\") pod \"41699b82-4fbd-4bc2-a45c-6971618962df\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.691860 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/41699b82-4fbd-4bc2-a45c-6971618962df-ca-trust-extracted\") pod \"41699b82-4fbd-4bc2-a45c-6971618962df\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.692102 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"41699b82-4fbd-4bc2-a45c-6971618962df\" (UID: \"41699b82-4fbd-4bc2-a45c-6971618962df\") " Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.693028 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41699b82-4fbd-4bc2-a45c-6971618962df-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "41699b82-4fbd-4bc2-a45c-6971618962df" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.693191 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41699b82-4fbd-4bc2-a45c-6971618962df-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "41699b82-4fbd-4bc2-a45c-6971618962df" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.701762 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-kube-api-access-xd7nq" (OuterVolumeSpecName: "kube-api-access-xd7nq") pod "41699b82-4fbd-4bc2-a45c-6971618962df" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df"). InnerVolumeSpecName "kube-api-access-xd7nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.702483 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "41699b82-4fbd-4bc2-a45c-6971618962df" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.704007 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "41699b82-4fbd-4bc2-a45c-6971618962df" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.708300 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41699b82-4fbd-4bc2-a45c-6971618962df-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "41699b82-4fbd-4bc2-a45c-6971618962df" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.713991 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "41699b82-4fbd-4bc2-a45c-6971618962df" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.734502 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41699b82-4fbd-4bc2-a45c-6971618962df-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "41699b82-4fbd-4bc2-a45c-6971618962df" (UID: "41699b82-4fbd-4bc2-a45c-6971618962df"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.794423 4813 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/41699b82-4fbd-4bc2-a45c-6971618962df-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.794480 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd7nq\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-kube-api-access-xd7nq\") on node \"crc\" DevicePath \"\"" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.794503 4813 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/41699b82-4fbd-4bc2-a45c-6971618962df-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.794524 4813 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.794544 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41699b82-4fbd-4bc2-a45c-6971618962df-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.794563 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41699b82-4fbd-4bc2-a45c-6971618962df-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:47:36 crc kubenswrapper[4813]: I0217 08:47:36.794581 4813 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/41699b82-4fbd-4bc2-a45c-6971618962df-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 08:47:37 crc kubenswrapper[4813]: I0217 08:47:37.443911 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" event={"ID":"41699b82-4fbd-4bc2-a45c-6971618962df","Type":"ContainerDied","Data":"04567e2f1a1502fd0886e5ea8fc1c4287a46a5110465403433a0cfacdfede547"} Feb 17 08:47:37 crc kubenswrapper[4813]: I0217 08:47:37.443924 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9pw9r" Feb 17 08:47:37 crc kubenswrapper[4813]: I0217 08:47:37.444461 4813 scope.go:117] "RemoveContainer" containerID="1e0d36ca4c88d246c6378b0896045bfc8f6c9af3eae4e93ca976fab6d3ed34af" Feb 17 08:47:37 crc kubenswrapper[4813]: I0217 08:47:37.482992 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9pw9r"] Feb 17 08:47:37 crc kubenswrapper[4813]: I0217 08:47:37.508374 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9pw9r"] Feb 17 08:47:39 crc kubenswrapper[4813]: I0217 08:47:39.122121 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41699b82-4fbd-4bc2-a45c-6971618962df" path="/var/lib/kubelet/pods/41699b82-4fbd-4bc2-a45c-6971618962df/volumes" Feb 17 08:49:35 crc kubenswrapper[4813]: I0217 08:49:35.165987 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:49:35 crc kubenswrapper[4813]: I0217 08:49:35.166747 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:50:05 crc kubenswrapper[4813]: I0217 08:50:05.166298 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:50:05 
crc kubenswrapper[4813]: I0217 08:50:05.167092 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:50:35 crc kubenswrapper[4813]: I0217 08:50:35.165603 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:50:35 crc kubenswrapper[4813]: I0217 08:50:35.166440 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:50:35 crc kubenswrapper[4813]: I0217 08:50:35.166555 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:50:35 crc kubenswrapper[4813]: I0217 08:50:35.167475 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b435a016b264f7638ac4f0875359992eccdfebd43455e04664f08b4b9bf401cf"} pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 08:50:35 crc kubenswrapper[4813]: I0217 08:50:35.167574 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" 
podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" containerID="cri-o://b435a016b264f7638ac4f0875359992eccdfebd43455e04664f08b4b9bf401cf" gracePeriod=600 Feb 17 08:50:35 crc kubenswrapper[4813]: I0217 08:50:35.723470 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a6ba827-b08b-4163-b067-d9adb119398d" containerID="b435a016b264f7638ac4f0875359992eccdfebd43455e04664f08b4b9bf401cf" exitCode=0 Feb 17 08:50:35 crc kubenswrapper[4813]: I0217 08:50:35.723585 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerDied","Data":"b435a016b264f7638ac4f0875359992eccdfebd43455e04664f08b4b9bf401cf"} Feb 17 08:50:35 crc kubenswrapper[4813]: I0217 08:50:35.724004 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerStarted","Data":"7375dc71231db7ccf7ec9a93ed4b7c58981e373ed44891ec0dfde219ffc963ad"} Feb 17 08:50:35 crc kubenswrapper[4813]: I0217 08:50:35.724113 4813 scope.go:117] "RemoveContainer" containerID="926e335d47cd84ab4eb72c1a31c7d4369f614aaae2415534ee97ac4b058875f2" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.770761 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs"] Feb 17 08:51:03 crc kubenswrapper[4813]: E0217 08:51:03.771744 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41699b82-4fbd-4bc2-a45c-6971618962df" containerName="registry" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.771763 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="41699b82-4fbd-4bc2-a45c-6971618962df" containerName="registry" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.771918 4813 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="41699b82-4fbd-4bc2-a45c-6971618962df" containerName="registry" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.772954 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.775880 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.784695 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs"] Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.874743 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht8gx\" (UniqueName: \"kubernetes.io/projected/b3262ba6-e759-4186-8461-29bda3c97987-kube-api-access-ht8gx\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs\" (UID: \"b3262ba6-e759-4186-8461-29bda3c97987\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.874923 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3262ba6-e759-4186-8461-29bda3c97987-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs\" (UID: \"b3262ba6-e759-4186-8461-29bda3c97987\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.875085 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3262ba6-e759-4186-8461-29bda3c97987-util\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs\" (UID: \"b3262ba6-e759-4186-8461-29bda3c97987\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.976591 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht8gx\" (UniqueName: \"kubernetes.io/projected/b3262ba6-e759-4186-8461-29bda3c97987-kube-api-access-ht8gx\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs\" (UID: \"b3262ba6-e759-4186-8461-29bda3c97987\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.976672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3262ba6-e759-4186-8461-29bda3c97987-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs\" (UID: \"b3262ba6-e759-4186-8461-29bda3c97987\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.976723 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3262ba6-e759-4186-8461-29bda3c97987-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs\" (UID: \"b3262ba6-e759-4186-8461-29bda3c97987\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.977298 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3262ba6-e759-4186-8461-29bda3c97987-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs\" (UID: \"b3262ba6-e759-4186-8461-29bda3c97987\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:03 crc kubenswrapper[4813]: I0217 08:51:03.977664 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3262ba6-e759-4186-8461-29bda3c97987-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs\" (UID: \"b3262ba6-e759-4186-8461-29bda3c97987\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:04 crc kubenswrapper[4813]: I0217 08:51:04.001728 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht8gx\" (UniqueName: \"kubernetes.io/projected/b3262ba6-e759-4186-8461-29bda3c97987-kube-api-access-ht8gx\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs\" (UID: \"b3262ba6-e759-4186-8461-29bda3c97987\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:04 crc kubenswrapper[4813]: I0217 08:51:04.095810 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:04 crc kubenswrapper[4813]: I0217 08:51:04.527173 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs"] Feb 17 08:51:04 crc kubenswrapper[4813]: I0217 08:51:04.925180 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" event={"ID":"b3262ba6-e759-4186-8461-29bda3c97987","Type":"ContainerStarted","Data":"aad11b4e2948ea50c9c4df78219441493cb05073be36f82225b2afb678f27515"} Feb 17 08:51:04 crc kubenswrapper[4813]: I0217 08:51:04.925233 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" event={"ID":"b3262ba6-e759-4186-8461-29bda3c97987","Type":"ContainerStarted","Data":"159a7f7db3fe979df7fa4a8c1c1cb59307d95a7d39ddc17b0fcf408917f2314e"} Feb 17 08:51:05 crc kubenswrapper[4813]: I0217 08:51:05.932600 4813 generic.go:334] "Generic (PLEG): container finished" podID="b3262ba6-e759-4186-8461-29bda3c97987" containerID="aad11b4e2948ea50c9c4df78219441493cb05073be36f82225b2afb678f27515" exitCode=0 Feb 17 08:51:05 crc kubenswrapper[4813]: I0217 08:51:05.932666 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" event={"ID":"b3262ba6-e759-4186-8461-29bda3c97987","Type":"ContainerDied","Data":"aad11b4e2948ea50c9c4df78219441493cb05073be36f82225b2afb678f27515"} Feb 17 08:51:05 crc kubenswrapper[4813]: I0217 08:51:05.935298 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 08:51:07 crc kubenswrapper[4813]: I0217 08:51:07.951601 4813 generic.go:334] "Generic (PLEG): container finished" podID="b3262ba6-e759-4186-8461-29bda3c97987" 
containerID="7ce807ee46be7a807cdfc13e9e9df0c9660124a13f0e58b8c49c719f0133919c" exitCode=0 Feb 17 08:51:07 crc kubenswrapper[4813]: I0217 08:51:07.951672 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" event={"ID":"b3262ba6-e759-4186-8461-29bda3c97987","Type":"ContainerDied","Data":"7ce807ee46be7a807cdfc13e9e9df0c9660124a13f0e58b8c49c719f0133919c"} Feb 17 08:51:08 crc kubenswrapper[4813]: I0217 08:51:08.966123 4813 generic.go:334] "Generic (PLEG): container finished" podID="b3262ba6-e759-4186-8461-29bda3c97987" containerID="96034b846eff5aa29b17340e138717d5f1157cb223345def6b58b3ffddbd771f" exitCode=0 Feb 17 08:51:08 crc kubenswrapper[4813]: I0217 08:51:08.966200 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" event={"ID":"b3262ba6-e759-4186-8461-29bda3c97987","Type":"ContainerDied","Data":"96034b846eff5aa29b17340e138717d5f1157cb223345def6b58b3ffddbd771f"} Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.343560 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.370188 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3262ba6-e759-4186-8461-29bda3c97987-util\") pod \"b3262ba6-e759-4186-8461-29bda3c97987\" (UID: \"b3262ba6-e759-4186-8461-29bda3c97987\") " Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.370350 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3262ba6-e759-4186-8461-29bda3c97987-bundle\") pod \"b3262ba6-e759-4186-8461-29bda3c97987\" (UID: \"b3262ba6-e759-4186-8461-29bda3c97987\") " Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.370414 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht8gx\" (UniqueName: \"kubernetes.io/projected/b3262ba6-e759-4186-8461-29bda3c97987-kube-api-access-ht8gx\") pod \"b3262ba6-e759-4186-8461-29bda3c97987\" (UID: \"b3262ba6-e759-4186-8461-29bda3c97987\") " Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.376524 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3262ba6-e759-4186-8461-29bda3c97987-bundle" (OuterVolumeSpecName: "bundle") pod "b3262ba6-e759-4186-8461-29bda3c97987" (UID: "b3262ba6-e759-4186-8461-29bda3c97987"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.378027 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3262ba6-e759-4186-8461-29bda3c97987-kube-api-access-ht8gx" (OuterVolumeSpecName: "kube-api-access-ht8gx") pod "b3262ba6-e759-4186-8461-29bda3c97987" (UID: "b3262ba6-e759-4186-8461-29bda3c97987"). InnerVolumeSpecName "kube-api-access-ht8gx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.387685 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3262ba6-e759-4186-8461-29bda3c97987-util" (OuterVolumeSpecName: "util") pod "b3262ba6-e759-4186-8461-29bda3c97987" (UID: "b3262ba6-e759-4186-8461-29bda3c97987"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.471632 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b3262ba6-e759-4186-8461-29bda3c97987-util\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.472032 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b3262ba6-e759-4186-8461-29bda3c97987-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.472046 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht8gx\" (UniqueName: \"kubernetes.io/projected/b3262ba6-e759-4186-8461-29bda3c97987-kube-api-access-ht8gx\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.987411 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" event={"ID":"b3262ba6-e759-4186-8461-29bda3c97987","Type":"ContainerDied","Data":"159a7f7db3fe979df7fa4a8c1c1cb59307d95a7d39ddc17b0fcf408917f2314e"} Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.987473 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="159a7f7db3fe979df7fa4a8c1c1cb59307d95a7d39ddc17b0fcf408917f2314e" Feb 17 08:51:10 crc kubenswrapper[4813]: I0217 08:51:10.987518 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs" Feb 17 08:51:14 crc kubenswrapper[4813]: I0217 08:51:14.849619 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qsj6b"] Feb 17 08:51:14 crc kubenswrapper[4813]: I0217 08:51:14.850249 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovn-controller" containerID="cri-o://af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b" gracePeriod=30 Feb 17 08:51:14 crc kubenswrapper[4813]: I0217 08:51:14.850674 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="sbdb" containerID="cri-o://4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30" gracePeriod=30 Feb 17 08:51:14 crc kubenswrapper[4813]: I0217 08:51:14.850730 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="nbdb" containerID="cri-o://2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b" gracePeriod=30 Feb 17 08:51:14 crc kubenswrapper[4813]: I0217 08:51:14.850776 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="northd" containerID="cri-o://f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058" gracePeriod=30 Feb 17 08:51:14 crc kubenswrapper[4813]: I0217 08:51:14.850827 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" 
containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126" gracePeriod=30 Feb 17 08:51:14 crc kubenswrapper[4813]: I0217 08:51:14.850874 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="kube-rbac-proxy-node" containerID="cri-o://1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028" gracePeriod=30 Feb 17 08:51:14 crc kubenswrapper[4813]: I0217 08:51:14.850921 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovn-acl-logging" containerID="cri-o://1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0" gracePeriod=30 Feb 17 08:51:14 crc kubenswrapper[4813]: I0217 08:51:14.895462 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" containerID="cri-o://c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7" gracePeriod=30 Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.028508 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swpdn_9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0/kube-multus/2.log" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.028847 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swpdn_9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0/kube-multus/1.log" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.028887 4813 generic.go:334] "Generic (PLEG): container finished" podID="9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0" containerID="ca05934e6e6052c9b3a5fdb83e9bbbf8a47816ebac6cb95bb2f96e065b933856" exitCode=2 Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 
08:51:15.028939 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swpdn" event={"ID":"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0","Type":"ContainerDied","Data":"ca05934e6e6052c9b3a5fdb83e9bbbf8a47816ebac6cb95bb2f96e065b933856"} Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.028976 4813 scope.go:117] "RemoveContainer" containerID="05624bdef750b78c738592304644643c1215e4d7441442c786ced24049d9ce40" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.029375 4813 scope.go:117] "RemoveContainer" containerID="ca05934e6e6052c9b3a5fdb83e9bbbf8a47816ebac6cb95bb2f96e065b933856" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.029643 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-swpdn_openshift-multus(9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0)\"" pod="openshift-multus/multus-swpdn" podUID="9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.034255 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/3.log" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.036223 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovn-acl-logging/0.log" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.036827 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovn-controller/0.log" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.037144 4813 generic.go:334] "Generic (PLEG): container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7" exitCode=0 Feb 17 08:51:15 crc 
kubenswrapper[4813]: I0217 08:51:15.037167 4813 generic.go:334] "Generic (PLEG): container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126" exitCode=0 Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.037176 4813 generic.go:334] "Generic (PLEG): container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028" exitCode=0 Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.037185 4813 generic.go:334] "Generic (PLEG): container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0" exitCode=143 Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.037192 4813 generic.go:334] "Generic (PLEG): container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b" exitCode=143 Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.037209 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7"} Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.037233 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126"} Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.037242 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028"} Feb 17 08:51:15 crc kubenswrapper[4813]: 
I0217 08:51:15.037251 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0"} Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.037259 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b"} Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.143116 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovnkube-controller/3.log" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.144227 4813 scope.go:117] "RemoveContainer" containerID="fe774234275cc257862d5497446684be5312c2f7be4aa6c5ea55311eadd8c532" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.145964 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovn-acl-logging/0.log" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.146536 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovn-controller/0.log" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.147030 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194368 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6mph6"] Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194558 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="northd" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194569 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="northd" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194579 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194585 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194595 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194600 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194608 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3262ba6-e759-4186-8461-29bda3c97987" containerName="extract" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194614 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3262ba6-e759-4186-8461-29bda3c97987" containerName="extract" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194620 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" 
containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194626 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194636 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3262ba6-e759-4186-8461-29bda3c97987" containerName="util" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194642 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3262ba6-e759-4186-8461-29bda3c97987" containerName="util" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194651 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovn-acl-logging" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194658 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovn-acl-logging" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194666 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="sbdb" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194672 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="sbdb" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194678 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194684 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194691 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="kubecfg-setup" Feb 17 08:51:15 
crc kubenswrapper[4813]: I0217 08:51:15.194697 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="kubecfg-setup" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194705 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3262ba6-e759-4186-8461-29bda3c97987" containerName="pull" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194710 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3262ba6-e759-4186-8461-29bda3c97987" containerName="pull" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194718 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="nbdb" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194724 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="nbdb" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194731 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovn-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194737 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovn-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.194745 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="kube-rbac-proxy-node" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194750 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="kube-rbac-proxy-node" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194831 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194840 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovn-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194848 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="nbdb" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194856 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194863 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3262ba6-e759-4186-8461-29bda3c97987" containerName="extract" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194872 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194878 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194886 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovn-acl-logging" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194891 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="sbdb" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194899 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="kube-rbac-proxy-node" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.194905 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="northd" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 
08:51:15.194912 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.195000 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.195008 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: E0217 08:51:15.195017 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.195022 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.195100 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerName="ovnkube-controller" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.196636 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235421 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235467 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-run-ovn-kubernetes\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235490 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-node-log\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235515 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-slash\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235537 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-systemd\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235578 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3513e95a-8ab1-42f1-8aa5-37400db92720-ovn-node-metrics-cert\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235607 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-ovnkube-config\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235655 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-openvswitch\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235686 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-ovn\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235710 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-kubelet\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235736 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8p8t\" (UniqueName: \"kubernetes.io/projected/3513e95a-8ab1-42f1-8aa5-37400db92720-kube-api-access-z8p8t\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: 
\"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235781 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-cni-bin\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235816 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-env-overrides\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235843 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-etc-openvswitch\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235873 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-var-lib-openvswitch\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235892 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-cni-netd\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235923 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-log-socket\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235951 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-systemd-units\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.235983 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-run-netns\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236006 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-ovnkube-script-lib\") pod \"3513e95a-8ab1-42f1-8aa5-37400db92720\" (UID: \"3513e95a-8ab1-42f1-8aa5-37400db92720\") " Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236112 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ce435b4-870b-46b1-993f-b5114b568304-ovnkube-config\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236150 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ce435b4-870b-46b1-993f-b5114b568304-ovnkube-script-lib\") pod \"ovnkube-node-6mph6\" (UID: 
\"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-slash\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236225 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ce435b4-870b-46b1-993f-b5114b568304-ovn-node-metrics-cert\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-etc-openvswitch\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236279 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-kubelet\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236337 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7lxb\" (UniqueName: \"kubernetes.io/projected/6ce435b4-870b-46b1-993f-b5114b568304-kube-api-access-r7lxb\") 
pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236369 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-node-log\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236394 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-cni-netd\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236422 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-cni-bin\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236442 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-run-systemd\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236460 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-var-lib-openvswitch\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236483 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-log-socket\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236510 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-run-ovn\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236530 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-run-netns\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236558 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-run-ovn-kubernetes\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236589 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-systemd-units\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236610 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-run-openvswitch\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236630 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236657 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ce435b4-870b-46b1-993f-b5114b568304-env-overrides\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236757 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236796 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236827 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-node-log" (OuterVolumeSpecName: "node-log") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.236849 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-slash" (OuterVolumeSpecName: "host-slash") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.237663 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.237663 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.237748 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.237744 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.237758 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-log-socket" (OuterVolumeSpecName: "log-socket") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.237783 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.237805 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.237812 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.237784 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.237838 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.238091 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.238131 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.238241 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.242873 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3513e95a-8ab1-42f1-8aa5-37400db92720-kube-api-access-z8p8t" (OuterVolumeSpecName: "kube-api-access-z8p8t") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "kube-api-access-z8p8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.243138 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3513e95a-8ab1-42f1-8aa5-37400db92720-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.249494 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3513e95a-8ab1-42f1-8aa5-37400db92720" (UID: "3513e95a-8ab1-42f1-8aa5-37400db92720"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338365 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7lxb\" (UniqueName: \"kubernetes.io/projected/6ce435b4-870b-46b1-993f-b5114b568304-kube-api-access-r7lxb\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338444 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-node-log\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338477 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-cni-netd\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338512 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-run-systemd\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338542 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-var-lib-openvswitch\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc 
kubenswrapper[4813]: I0217 08:51:15.338575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-cni-bin\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338610 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-log-socket\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338653 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-run-ovn\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338686 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-run-netns\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338729 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-run-ovn-kubernetes\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338775 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-systemd-units\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338810 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-run-openvswitch\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338843 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338893 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ce435b4-870b-46b1-993f-b5114b568304-env-overrides\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.338947 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ce435b4-870b-46b1-993f-b5114b568304-ovnkube-config\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339003 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ce435b4-870b-46b1-993f-b5114b568304-ovnkube-script-lib\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339067 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-slash\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339143 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ce435b4-870b-46b1-993f-b5114b568304-ovn-node-metrics-cert\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339195 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-etc-openvswitch\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339246 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-kubelet\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339387 4813 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339426 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339453 4813 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339480 4813 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339505 4813 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-node-log\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339528 4813 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-slash\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339551 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-run-ovn-kubernetes\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339596 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-var-lib-openvswitch\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339675 4813 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339679 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-run-ovn\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339717 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-run-netns\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339728 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-kubelet\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339740 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-systemd-units\") pod \"ovnkube-node-6mph6\" (UID: 
\"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339763 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-log-socket\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339781 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339782 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-slash\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339926 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3513e95a-8ab1-42f1-8aa5-37400db92720-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339956 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-run-openvswitch\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339975 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-run-systemd\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.339977 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-cni-netd\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340012 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-host-cni-bin\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340049 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-etc-openvswitch\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340113 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ce435b4-870b-46b1-993f-b5114b568304-node-log\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340138 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340161 4813 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340180 4813 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340198 4813 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340216 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8p8t\" (UniqueName: \"kubernetes.io/projected/3513e95a-8ab1-42f1-8aa5-37400db92720-kube-api-access-z8p8t\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340236 4813 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340255 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3513e95a-8ab1-42f1-8aa5-37400db92720-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340273 4813 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 
crc kubenswrapper[4813]: I0217 08:51:15.340291 4813 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340331 4813 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340348 4813 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-log-socket\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340365 4813 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3513e95a-8ab1-42f1-8aa5-37400db92720-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340632 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ce435b4-870b-46b1-993f-b5114b568304-env-overrides\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.340656 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ce435b4-870b-46b1-993f-b5114b568304-ovnkube-script-lib\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.341067 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ce435b4-870b-46b1-993f-b5114b568304-ovnkube-config\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.345247 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ce435b4-870b-46b1-993f-b5114b568304-ovn-node-metrics-cert\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.366700 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7lxb\" (UniqueName: \"kubernetes.io/projected/6ce435b4-870b-46b1-993f-b5114b568304-kube-api-access-r7lxb\") pod \"ovnkube-node-6mph6\" (UID: \"6ce435b4-870b-46b1-993f-b5114b568304\") " pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:15 crc kubenswrapper[4813]: I0217 08:51:15.511003 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.043052 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swpdn_9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0/kube-multus/2.log" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.044327 4813 generic.go:334] "Generic (PLEG): container finished" podID="6ce435b4-870b-46b1-993f-b5114b568304" containerID="3d9dd4f2fd56e4fc723a550d77a2db84de1bf3452f271a4c51428059e3750180" exitCode=0 Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.044368 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" event={"ID":"6ce435b4-870b-46b1-993f-b5114b568304","Type":"ContainerDied","Data":"3d9dd4f2fd56e4fc723a550d77a2db84de1bf3452f271a4c51428059e3750180"} Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.044391 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" event={"ID":"6ce435b4-870b-46b1-993f-b5114b568304","Type":"ContainerStarted","Data":"e37ada57b4616319494d7a56e086d75221e18e8105c6b34b0bdbef028591dc2b"} Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.048625 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovn-acl-logging/0.log" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.049146 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qsj6b_3513e95a-8ab1-42f1-8aa5-37400db92720/ovn-controller/0.log" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.049672 4813 generic.go:334] "Generic (PLEG): container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30" exitCode=0 Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.049708 4813 generic.go:334] "Generic (PLEG): 
container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b" exitCode=0 Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.049722 4813 generic.go:334] "Generic (PLEG): container finished" podID="3513e95a-8ab1-42f1-8aa5-37400db92720" containerID="f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058" exitCode=0 Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.049756 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30"} Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.049785 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b"} Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.049801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058"} Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.049817 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" event={"ID":"3513e95a-8ab1-42f1-8aa5-37400db92720","Type":"ContainerDied","Data":"db10204b695a5ec40f63524d829c13b4197fdeceac65069487d8d76acb908bdf"} Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.049837 4813 scope.go:117] "RemoveContainer" containerID="c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.049991 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qsj6b" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.069334 4813 scope.go:117] "RemoveContainer" containerID="4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.093004 4813 scope.go:117] "RemoveContainer" containerID="2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.123429 4813 scope.go:117] "RemoveContainer" containerID="f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.125463 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qsj6b"] Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.127957 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qsj6b"] Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.140889 4813 scope.go:117] "RemoveContainer" containerID="eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.155112 4813 scope.go:117] "RemoveContainer" containerID="1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.169729 4813 scope.go:117] "RemoveContainer" containerID="1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.188908 4813 scope.go:117] "RemoveContainer" containerID="af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.205217 4813 scope.go:117] "RemoveContainer" containerID="b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.237896 4813 scope.go:117] "RemoveContainer" 
containerID="c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7" Feb 17 08:51:16 crc kubenswrapper[4813]: E0217 08:51:16.238269 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7\": container with ID starting with c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7 not found: ID does not exist" containerID="c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.238373 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7"} err="failed to get container status \"c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7\": rpc error: code = NotFound desc = could not find container \"c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7\": container with ID starting with c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.238395 4813 scope.go:117] "RemoveContainer" containerID="4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30" Feb 17 08:51:16 crc kubenswrapper[4813]: E0217 08:51:16.238664 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\": container with ID starting with 4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30 not found: ID does not exist" containerID="4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.238688 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30"} err="failed to get container status \"4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\": rpc error: code = NotFound desc = could not find container \"4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\": container with ID starting with 4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.238701 4813 scope.go:117] "RemoveContainer" containerID="2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b" Feb 17 08:51:16 crc kubenswrapper[4813]: E0217 08:51:16.239039 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\": container with ID starting with 2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b not found: ID does not exist" containerID="2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.239065 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b"} err="failed to get container status \"2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\": rpc error: code = NotFound desc = could not find container \"2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\": container with ID starting with 2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.239080 4813 scope.go:117] "RemoveContainer" containerID="f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058" Feb 17 08:51:16 crc kubenswrapper[4813]: E0217 08:51:16.239263 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\": container with ID starting with f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058 not found: ID does not exist" containerID="f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.239284 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058"} err="failed to get container status \"f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\": rpc error: code = NotFound desc = could not find container \"f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\": container with ID starting with f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.239296 4813 scope.go:117] "RemoveContainer" containerID="eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126" Feb 17 08:51:16 crc kubenswrapper[4813]: E0217 08:51:16.239473 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\": container with ID starting with eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126 not found: ID does not exist" containerID="eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.239488 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126"} err="failed to get container status \"eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\": rpc error: code = NotFound desc = could not find container 
\"eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\": container with ID starting with eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.239502 4813 scope.go:117] "RemoveContainer" containerID="1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028" Feb 17 08:51:16 crc kubenswrapper[4813]: E0217 08:51:16.239730 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\": container with ID starting with 1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028 not found: ID does not exist" containerID="1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.239752 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028"} err="failed to get container status \"1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\": rpc error: code = NotFound desc = could not find container \"1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\": container with ID starting with 1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.239764 4813 scope.go:117] "RemoveContainer" containerID="1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0" Feb 17 08:51:16 crc kubenswrapper[4813]: E0217 08:51:16.239974 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\": container with ID starting with 1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0 not found: ID does not exist" 
containerID="1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.239994 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0"} err="failed to get container status \"1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\": rpc error: code = NotFound desc = could not find container \"1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\": container with ID starting with 1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.240005 4813 scope.go:117] "RemoveContainer" containerID="af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b" Feb 17 08:51:16 crc kubenswrapper[4813]: E0217 08:51:16.240236 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\": container with ID starting with af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b not found: ID does not exist" containerID="af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.240256 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b"} err="failed to get container status \"af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\": rpc error: code = NotFound desc = could not find container \"af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\": container with ID starting with af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.240268 4813 scope.go:117] 
"RemoveContainer" containerID="b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a" Feb 17 08:51:16 crc kubenswrapper[4813]: E0217 08:51:16.240485 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\": container with ID starting with b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a not found: ID does not exist" containerID="b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.240504 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a"} err="failed to get container status \"b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\": rpc error: code = NotFound desc = could not find container \"b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\": container with ID starting with b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.240517 4813 scope.go:117] "RemoveContainer" containerID="c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.240716 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7"} err="failed to get container status \"c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7\": rpc error: code = NotFound desc = could not find container \"c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7\": container with ID starting with c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.240734 4813 
scope.go:117] "RemoveContainer" containerID="4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.240953 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30"} err="failed to get container status \"4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\": rpc error: code = NotFound desc = could not find container \"4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\": container with ID starting with 4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.240977 4813 scope.go:117] "RemoveContainer" containerID="2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.241181 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b"} err="failed to get container status \"2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\": rpc error: code = NotFound desc = could not find container \"2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\": container with ID starting with 2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.241199 4813 scope.go:117] "RemoveContainer" containerID="f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.241413 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058"} err="failed to get container status \"f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\": rpc 
error: code = NotFound desc = could not find container \"f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\": container with ID starting with f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.241430 4813 scope.go:117] "RemoveContainer" containerID="eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.241736 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126"} err="failed to get container status \"eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\": rpc error: code = NotFound desc = could not find container \"eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\": container with ID starting with eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.241760 4813 scope.go:117] "RemoveContainer" containerID="1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.241946 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028"} err="failed to get container status \"1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\": rpc error: code = NotFound desc = could not find container \"1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\": container with ID starting with 1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.241963 4813 scope.go:117] "RemoveContainer" containerID="1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0" Feb 17 08:51:16 crc 
kubenswrapper[4813]: I0217 08:51:16.242130 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0"} err="failed to get container status \"1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\": rpc error: code = NotFound desc = could not find container \"1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\": container with ID starting with 1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.242148 4813 scope.go:117] "RemoveContainer" containerID="af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.242376 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b"} err="failed to get container status \"af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\": rpc error: code = NotFound desc = could not find container \"af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\": container with ID starting with af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.242395 4813 scope.go:117] "RemoveContainer" containerID="b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.242578 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a"} err="failed to get container status \"b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\": rpc error: code = NotFound desc = could not find container \"b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\": container 
with ID starting with b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.242607 4813 scope.go:117] "RemoveContainer" containerID="c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.242788 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7"} err="failed to get container status \"c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7\": rpc error: code = NotFound desc = could not find container \"c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7\": container with ID starting with c838308e3f5ab3f5fdba2db9d7faa1e0c95169843d81b6f0c3aaa31943f276b7 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.242806 4813 scope.go:117] "RemoveContainer" containerID="4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.243005 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30"} err="failed to get container status \"4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\": rpc error: code = NotFound desc = could not find container \"4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30\": container with ID starting with 4213db1543e61bf680bb451cc837a8fcf7408bbd4d423a28b3aa18f86fc36c30 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.243022 4813 scope.go:117] "RemoveContainer" containerID="2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.249714 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b"} err="failed to get container status \"2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\": rpc error: code = NotFound desc = could not find container \"2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b\": container with ID starting with 2a8c3e65ea04f42467e757656796bed51add1ac57acac154fe1494ae7bb4287b not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.249753 4813 scope.go:117] "RemoveContainer" containerID="f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.250026 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058"} err="failed to get container status \"f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\": rpc error: code = NotFound desc = could not find container \"f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058\": container with ID starting with f6be9762879ad4a013c4c3c56b2dccaef39eceea97e9b5ca9769645ce9d6e058 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.250050 4813 scope.go:117] "RemoveContainer" containerID="eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.250292 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126"} err="failed to get container status \"eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\": rpc error: code = NotFound desc = could not find container \"eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126\": container with ID starting with eef199208766433280940417716a6fca3458cc5190f85a69a70a3280f7d2d126 not found: ID does not 
exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.250332 4813 scope.go:117] "RemoveContainer" containerID="1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.250611 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028"} err="failed to get container status \"1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\": rpc error: code = NotFound desc = could not find container \"1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028\": container with ID starting with 1c69a2ad66b10ebaf8b31bbf2d3246cdfe8b4edb2f1317f817e6583fbde09028 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.250638 4813 scope.go:117] "RemoveContainer" containerID="1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.257458 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0"} err="failed to get container status \"1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\": rpc error: code = NotFound desc = could not find container \"1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0\": container with ID starting with 1ec1dca4d300cd4444bd2014aae71ec3bc86713ac9ce8facd9b22378ee25f7f0 not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.257500 4813 scope.go:117] "RemoveContainer" containerID="af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.257857 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b"} err="failed to get container status 
\"af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\": rpc error: code = NotFound desc = could not find container \"af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b\": container with ID starting with af6512dbbbdeb29b8530287104fb34115aff94b3cecfa1e886ac342fc8aaf66b not found: ID does not exist" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.257899 4813 scope.go:117] "RemoveContainer" containerID="b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a" Feb 17 08:51:16 crc kubenswrapper[4813]: I0217 08:51:16.258273 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a"} err="failed to get container status \"b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\": rpc error: code = NotFound desc = could not find container \"b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a\": container with ID starting with b8e1f8a7f3f97ec1b7f873e73f61268736b2c189aa35da5fa7cc23c9202bcf3a not found: ID does not exist" Feb 17 08:51:17 crc kubenswrapper[4813]: I0217 08:51:17.056273 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" event={"ID":"6ce435b4-870b-46b1-993f-b5114b568304","Type":"ContainerStarted","Data":"5ab8a7eb82e28f344408e4fbb74551910ec308c27633c3d87afe49b4f619eb69"} Feb 17 08:51:17 crc kubenswrapper[4813]: I0217 08:51:17.057454 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" event={"ID":"6ce435b4-870b-46b1-993f-b5114b568304","Type":"ContainerStarted","Data":"23eb3ee8e4856afe96b1bb822c4d1a0e074898ba92f7c95251ec130812a7ed28"} Feb 17 08:51:17 crc kubenswrapper[4813]: I0217 08:51:17.057530 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" 
event={"ID":"6ce435b4-870b-46b1-993f-b5114b568304","Type":"ContainerStarted","Data":"e01e3d7baedbe5381d249adf874683ac14f6f7c27dc6cd85fee5ba35fcb773ca"} Feb 17 08:51:17 crc kubenswrapper[4813]: I0217 08:51:17.057597 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" event={"ID":"6ce435b4-870b-46b1-993f-b5114b568304","Type":"ContainerStarted","Data":"99e4258ea96cc4ed2dd04e07c0ba4999b72cac4fff4485a7b0a9292e9fb6a5f3"} Feb 17 08:51:17 crc kubenswrapper[4813]: I0217 08:51:17.057656 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" event={"ID":"6ce435b4-870b-46b1-993f-b5114b568304","Type":"ContainerStarted","Data":"c06e05d9e63ea77ec503179edc28cf358a7149595ff0b6c647a3ab63ef6ce532"} Feb 17 08:51:17 crc kubenswrapper[4813]: I0217 08:51:17.057708 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" event={"ID":"6ce435b4-870b-46b1-993f-b5114b568304","Type":"ContainerStarted","Data":"5684ae9062a28b16241f854ad5a6c1d3bb825269a7aa964d3536df685df8f5da"} Feb 17 08:51:17 crc kubenswrapper[4813]: I0217 08:51:17.117633 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3513e95a-8ab1-42f1-8aa5-37400db92720" path="/var/lib/kubelet/pods/3513e95a-8ab1-42f1-8aa5-37400db92720/volumes" Feb 17 08:51:19 crc kubenswrapper[4813]: I0217 08:51:19.078111 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" event={"ID":"6ce435b4-870b-46b1-993f-b5114b568304","Type":"ContainerStarted","Data":"1b503ce21933e72965deb6239b36866940cf4df6b4cda5b9bf434bbd9ba2b8e3"} Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.530820 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd"] Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.531659 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.533109 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-qkc6t" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.535341 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.536679 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.611584 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s72dh\" (UniqueName: \"kubernetes.io/projected/8c42bb3e-30f2-484f-98d6-cc3d6209897a-kube-api-access-s72dh\") pod \"obo-prometheus-operator-68bc856cb9-jzdcd\" (UID: \"8c42bb3e-30f2-484f-98d6-cc3d6209897a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.655992 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl"] Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.656862 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.661593 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv"] Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.662218 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.673430 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.673672 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-dgtsz" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.713376 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/898154a7-5f53-4b78-bd75-4c62b2e6cae1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5667669b-gvchv\" (UID: \"898154a7-5f53-4b78-bd75-4c62b2e6cae1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.713649 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s72dh\" (UniqueName: \"kubernetes.io/projected/8c42bb3e-30f2-484f-98d6-cc3d6209897a-kube-api-access-s72dh\") pod \"obo-prometheus-operator-68bc856cb9-jzdcd\" (UID: \"8c42bb3e-30f2-484f-98d6-cc3d6209897a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.713734 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3614da2d-8f3c-41bd-a31c-9d7fdef31fad-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5667669b-55qdl\" (UID: \"3614da2d-8f3c-41bd-a31c-9d7fdef31fad\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.713808 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3614da2d-8f3c-41bd-a31c-9d7fdef31fad-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5667669b-55qdl\" (UID: \"3614da2d-8f3c-41bd-a31c-9d7fdef31fad\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.713896 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/898154a7-5f53-4b78-bd75-4c62b2e6cae1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5667669b-gvchv\" (UID: \"898154a7-5f53-4b78-bd75-4c62b2e6cae1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.741117 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s72dh\" (UniqueName: \"kubernetes.io/projected/8c42bb3e-30f2-484f-98d6-cc3d6209897a-kube-api-access-s72dh\") pod \"obo-prometheus-operator-68bc856cb9-jzdcd\" (UID: \"8c42bb3e-30f2-484f-98d6-cc3d6209897a\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.814658 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/898154a7-5f53-4b78-bd75-4c62b2e6cae1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5667669b-gvchv\" (UID: \"898154a7-5f53-4b78-bd75-4c62b2e6cae1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.815140 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/898154a7-5f53-4b78-bd75-4c62b2e6cae1-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-5667669b-gvchv\" (UID: \"898154a7-5f53-4b78-bd75-4c62b2e6cae1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.815182 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3614da2d-8f3c-41bd-a31c-9d7fdef31fad-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5667669b-55qdl\" (UID: \"3614da2d-8f3c-41bd-a31c-9d7fdef31fad\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.815209 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3614da2d-8f3c-41bd-a31c-9d7fdef31fad-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5667669b-55qdl\" (UID: \"3614da2d-8f3c-41bd-a31c-9d7fdef31fad\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.819752 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3614da2d-8f3c-41bd-a31c-9d7fdef31fad-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5667669b-55qdl\" (UID: \"3614da2d-8f3c-41bd-a31c-9d7fdef31fad\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.819788 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/898154a7-5f53-4b78-bd75-4c62b2e6cae1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5667669b-gvchv\" (UID: \"898154a7-5f53-4b78-bd75-4c62b2e6cae1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" Feb 17 
08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.819925 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3614da2d-8f3c-41bd-a31c-9d7fdef31fad-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5667669b-55qdl\" (UID: \"3614da2d-8f3c-41bd-a31c-9d7fdef31fad\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.820034 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/898154a7-5f53-4b78-bd75-4c62b2e6cae1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5667669b-gvchv\" (UID: \"898154a7-5f53-4b78-bd75-4c62b2e6cae1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.845607 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-629tg"] Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.846628 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-629tg" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.848269 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.848329 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-826mj" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.848565 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 17 08:51:21 crc kubenswrapper[4813]: E0217 08:51:21.874067 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators_8c42bb3e-30f2-484f-98d6-cc3d6209897a_0(f71716dc0e65d61bcfe6bc332e01f08c384d3bd2b74d4297fdd090a5ed56af10): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 08:51:21 crc kubenswrapper[4813]: E0217 08:51:21.874125 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators_8c42bb3e-30f2-484f-98d6-cc3d6209897a_0(f71716dc0e65d61bcfe6bc332e01f08c384d3bd2b74d4297fdd090a5ed56af10): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" Feb 17 08:51:21 crc kubenswrapper[4813]: E0217 08:51:21.874146 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators_8c42bb3e-30f2-484f-98d6-cc3d6209897a_0(f71716dc0e65d61bcfe6bc332e01f08c384d3bd2b74d4297fdd090a5ed56af10): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" Feb 17 08:51:21 crc kubenswrapper[4813]: E0217 08:51:21.874184 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators(8c42bb3e-30f2-484f-98d6-cc3d6209897a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators(8c42bb3e-30f2-484f-98d6-cc3d6209897a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators_8c42bb3e-30f2-484f-98d6-cc3d6209897a_0(f71716dc0e65d61bcfe6bc332e01f08c384d3bd2b74d4297fdd090a5ed56af10): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" podUID="8c42bb3e-30f2-484f-98d6-cc3d6209897a" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.915877 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/05000f8d-0078-48a4-a118-e184c008b5d4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-629tg\" (UID: \"05000f8d-0078-48a4-a118-e184c008b5d4\") " pod="openshift-operators/observability-operator-59bdc8b94-629tg" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.915924 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l27w\" (UniqueName: \"kubernetes.io/projected/05000f8d-0078-48a4-a118-e184c008b5d4-kube-api-access-5l27w\") pod \"observability-operator-59bdc8b94-629tg\" (UID: \"05000f8d-0078-48a4-a118-e184c008b5d4\") " pod="openshift-operators/observability-operator-59bdc8b94-629tg" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.957808 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/perses-operator-5bf474d74f-5xprj"] Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.958411 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5xprj" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.961060 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-lt6np" Feb 17 08:51:21 crc kubenswrapper[4813]: I0217 08:51:21.974151 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.007387 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators_3614da2d-8f3c-41bd-a31c-9d7fdef31fad_0(23cc84bcc2eee59e50dd3656948ea0c2a435233a8c12608a41a08e2aff5fa658): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.007485 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators_3614da2d-8f3c-41bd-a31c-9d7fdef31fad_0(23cc84bcc2eee59e50dd3656948ea0c2a435233a8c12608a41a08e2aff5fa658): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.007512 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators_3614da2d-8f3c-41bd-a31c-9d7fdef31fad_0(23cc84bcc2eee59e50dd3656948ea0c2a435233a8c12608a41a08e2aff5fa658): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.007569 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators(3614da2d-8f3c-41bd-a31c-9d7fdef31fad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators(3614da2d-8f3c-41bd-a31c-9d7fdef31fad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators_3614da2d-8f3c-41bd-a31c-9d7fdef31fad_0(23cc84bcc2eee59e50dd3656948ea0c2a435233a8c12608a41a08e2aff5fa658): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" podUID="3614da2d-8f3c-41bd-a31c-9d7fdef31fad" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.016755 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/05000f8d-0078-48a4-a118-e184c008b5d4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-629tg\" (UID: \"05000f8d-0078-48a4-a118-e184c008b5d4\") " pod="openshift-operators/observability-operator-59bdc8b94-629tg" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.017218 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l27w\" (UniqueName: \"kubernetes.io/projected/05000f8d-0078-48a4-a118-e184c008b5d4-kube-api-access-5l27w\") pod \"observability-operator-59bdc8b94-629tg\" (UID: \"05000f8d-0078-48a4-a118-e184c008b5d4\") " pod="openshift-operators/observability-operator-59bdc8b94-629tg" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.017252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c76875de-7ea3-431b-882a-e12415659320-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5xprj\" (UID: \"c76875de-7ea3-431b-882a-e12415659320\") " pod="openshift-operators/perses-operator-5bf474d74f-5xprj" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.017273 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdm6s\" (UniqueName: \"kubernetes.io/projected/c76875de-7ea3-431b-882a-e12415659320-kube-api-access-jdm6s\") pod \"perses-operator-5bf474d74f-5xprj\" (UID: \"c76875de-7ea3-431b-882a-e12415659320\") " pod="openshift-operators/perses-operator-5bf474d74f-5xprj" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.022087 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/05000f8d-0078-48a4-a118-e184c008b5d4-observability-operator-tls\") pod \"observability-operator-59bdc8b94-629tg\" (UID: \"05000f8d-0078-48a4-a118-e184c008b5d4\") " pod="openshift-operators/observability-operator-59bdc8b94-629tg" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.037917 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l27w\" (UniqueName: \"kubernetes.io/projected/05000f8d-0078-48a4-a118-e184c008b5d4-kube-api-access-5l27w\") pod \"observability-operator-59bdc8b94-629tg\" (UID: \"05000f8d-0078-48a4-a118-e184c008b5d4\") " pod="openshift-operators/observability-operator-59bdc8b94-629tg" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.042169 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.059096 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators_898154a7-5f53-4b78-bd75-4c62b2e6cae1_0(e7e3b5d50149b7d5fc1c458d33c2c16a4616cbeefa7ed91a002021e32fa41b7b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.059158 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators_898154a7-5f53-4b78-bd75-4c62b2e6cae1_0(e7e3b5d50149b7d5fc1c458d33c2c16a4616cbeefa7ed91a002021e32fa41b7b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.059180 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators_898154a7-5f53-4b78-bd75-4c62b2e6cae1_0(e7e3b5d50149b7d5fc1c458d33c2c16a4616cbeefa7ed91a002021e32fa41b7b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.059224 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators(898154a7-5f53-4b78-bd75-4c62b2e6cae1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators(898154a7-5f53-4b78-bd75-4c62b2e6cae1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators_898154a7-5f53-4b78-bd75-4c62b2e6cae1_0(e7e3b5d50149b7d5fc1c458d33c2c16a4616cbeefa7ed91a002021e32fa41b7b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" podUID="898154a7-5f53-4b78-bd75-4c62b2e6cae1" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.097681 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" event={"ID":"6ce435b4-870b-46b1-993f-b5114b568304","Type":"ContainerStarted","Data":"f1436b10e0faae325ef2f15401d3883ff8d33855d6d6118803fe1bc15997297d"} Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.098836 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.098882 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.098891 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.118742 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c76875de-7ea3-431b-882a-e12415659320-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5xprj\" (UID: \"c76875de-7ea3-431b-882a-e12415659320\") " pod="openshift-operators/perses-operator-5bf474d74f-5xprj" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.119509 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdm6s\" (UniqueName: \"kubernetes.io/projected/c76875de-7ea3-431b-882a-e12415659320-kube-api-access-jdm6s\") pod \"perses-operator-5bf474d74f-5xprj\" (UID: \"c76875de-7ea3-431b-882a-e12415659320\") " pod="openshift-operators/perses-operator-5bf474d74f-5xprj" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.119466 4813 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c76875de-7ea3-431b-882a-e12415659320-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5xprj\" (UID: \"c76875de-7ea3-431b-882a-e12415659320\") " pod="openshift-operators/perses-operator-5bf474d74f-5xprj" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.149122 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdm6s\" (UniqueName: \"kubernetes.io/projected/c76875de-7ea3-431b-882a-e12415659320-kube-api-access-jdm6s\") pod \"perses-operator-5bf474d74f-5xprj\" (UID: \"c76875de-7ea3-431b-882a-e12415659320\") " pod="openshift-operators/perses-operator-5bf474d74f-5xprj" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.159442 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" podStartSLOduration=7.159422997 podStartE2EDuration="7.159422997s" podCreationTimestamp="2026-02-17 08:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:51:22.144503078 +0000 UTC m=+629.805264301" watchObservedRunningTime="2026-02-17 08:51:22.159422997 +0000 UTC m=+629.820184220" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.163667 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.164677 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.169053 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-629tg" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.187856 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-629tg_openshift-operators_05000f8d-0078-48a4-a118-e184c008b5d4_0(fe4fa00c00b1a3317cca58af65e61f3edcc0b10efdf9024dc776127a68ecd463): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.187913 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-629tg_openshift-operators_05000f8d-0078-48a4-a118-e184c008b5d4_0(fe4fa00c00b1a3317cca58af65e61f3edcc0b10efdf9024dc776127a68ecd463): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-629tg" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.187934 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-629tg_openshift-operators_05000f8d-0078-48a4-a118-e184c008b5d4_0(fe4fa00c00b1a3317cca58af65e61f3edcc0b10efdf9024dc776127a68ecd463): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-629tg" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.187975 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-629tg_openshift-operators(05000f8d-0078-48a4-a118-e184c008b5d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-629tg_openshift-operators(05000f8d-0078-48a4-a118-e184c008b5d4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-629tg_openshift-operators_05000f8d-0078-48a4-a118-e184c008b5d4_0(fe4fa00c00b1a3317cca58af65e61f3edcc0b10efdf9024dc776127a68ecd463): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-629tg" podUID="05000f8d-0078-48a4-a118-e184c008b5d4" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.271203 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5xprj" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.289481 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5xprj_openshift-operators_c76875de-7ea3-431b-882a-e12415659320_0(8c38dce3299006aa0be61c5974df6496ff3c18c2f4d1b3f1e3fc3fc257ad42b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.289543 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5xprj_openshift-operators_c76875de-7ea3-431b-882a-e12415659320_0(8c38dce3299006aa0be61c5974df6496ff3c18c2f4d1b3f1e3fc3fc257ad42b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-5xprj" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.289565 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5xprj_openshift-operators_c76875de-7ea3-431b-882a-e12415659320_0(8c38dce3299006aa0be61c5974df6496ff3c18c2f4d1b3f1e3fc3fc257ad42b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-5xprj" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.289611 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-5xprj_openshift-operators(c76875de-7ea3-431b-882a-e12415659320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-5xprj_openshift-operators(c76875de-7ea3-431b-882a-e12415659320)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5xprj_openshift-operators_c76875de-7ea3-431b-882a-e12415659320_0(8c38dce3299006aa0be61c5974df6496ff3c18c2f4d1b3f1e3fc3fc257ad42b9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-5xprj" podUID="c76875de-7ea3-431b-882a-e12415659320" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.434159 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5xprj"] Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.437855 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-629tg"] Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.440986 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd"] Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.441091 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.441630 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.461948 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv"] Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.462075 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.462625 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.463508 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators_8c42bb3e-30f2-484f-98d6-cc3d6209897a_0(8c9ff13752eac5a98187e8a06cdc2df7cb24a41fffb629691a644fcdb18ced19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.463572 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators_8c42bb3e-30f2-484f-98d6-cc3d6209897a_0(8c9ff13752eac5a98187e8a06cdc2df7cb24a41fffb629691a644fcdb18ced19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.463593 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators_8c42bb3e-30f2-484f-98d6-cc3d6209897a_0(8c9ff13752eac5a98187e8a06cdc2df7cb24a41fffb629691a644fcdb18ced19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.463635 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators(8c42bb3e-30f2-484f-98d6-cc3d6209897a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators(8c42bb3e-30f2-484f-98d6-cc3d6209897a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators_8c42bb3e-30f2-484f-98d6-cc3d6209897a_0(8c9ff13752eac5a98187e8a06cdc2df7cb24a41fffb629691a644fcdb18ced19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" podUID="8c42bb3e-30f2-484f-98d6-cc3d6209897a" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.476773 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl"] Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.476866 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" Feb 17 08:51:22 crc kubenswrapper[4813]: I0217 08:51:22.477245 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl"
Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.482645 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators_898154a7-5f53-4b78-bd75-4c62b2e6cae1_0(084c879b928ef2ee96cfb5f6bb2dcf31d13a6ed6d1da4ec69caa9c69237139b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.482699 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators_898154a7-5f53-4b78-bd75-4c62b2e6cae1_0(084c879b928ef2ee96cfb5f6bb2dcf31d13a6ed6d1da4ec69caa9c69237139b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv"
Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.482720 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators_898154a7-5f53-4b78-bd75-4c62b2e6cae1_0(084c879b928ef2ee96cfb5f6bb2dcf31d13a6ed6d1da4ec69caa9c69237139b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv"
Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.482764 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators(898154a7-5f53-4b78-bd75-4c62b2e6cae1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators(898154a7-5f53-4b78-bd75-4c62b2e6cae1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators_898154a7-5f53-4b78-bd75-4c62b2e6cae1_0(084c879b928ef2ee96cfb5f6bb2dcf31d13a6ed6d1da4ec69caa9c69237139b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" podUID="898154a7-5f53-4b78-bd75-4c62b2e6cae1"
Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.522813 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators_3614da2d-8f3c-41bd-a31c-9d7fdef31fad_0(697d9cc54f63f4a12907df7c0d689d94f79de1ee7038b64ee90680286a115e90): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.522881 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators_3614da2d-8f3c-41bd-a31c-9d7fdef31fad_0(697d9cc54f63f4a12907df7c0d689d94f79de1ee7038b64ee90680286a115e90): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl"
Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.522900 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators_3614da2d-8f3c-41bd-a31c-9d7fdef31fad_0(697d9cc54f63f4a12907df7c0d689d94f79de1ee7038b64ee90680286a115e90): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl"
Feb 17 08:51:22 crc kubenswrapper[4813]: E0217 08:51:22.522946 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators(3614da2d-8f3c-41bd-a31c-9d7fdef31fad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators(3614da2d-8f3c-41bd-a31c-9d7fdef31fad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators_3614da2d-8f3c-41bd-a31c-9d7fdef31fad_0(697d9cc54f63f4a12907df7c0d689d94f79de1ee7038b64ee90680286a115e90): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" podUID="3614da2d-8f3c-41bd-a31c-9d7fdef31fad"
Feb 17 08:51:23 crc kubenswrapper[4813]: I0217 08:51:23.104540 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5xprj"
Feb 17 08:51:23 crc kubenswrapper[4813]: I0217 08:51:23.104567 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-629tg"
Feb 17 08:51:23 crc kubenswrapper[4813]: I0217 08:51:23.105594 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5xprj"
Feb 17 08:51:23 crc kubenswrapper[4813]: I0217 08:51:23.105731 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-629tg"
Feb 17 08:51:23 crc kubenswrapper[4813]: E0217 08:51:23.162975 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-629tg_openshift-operators_05000f8d-0078-48a4-a118-e184c008b5d4_0(767aa79faee5f9c7c8d7a5cdf48c3129523266b8c44c4d8e1ab3ee791db8bdf7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 08:51:23 crc kubenswrapper[4813]: E0217 08:51:23.163065 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-629tg_openshift-operators_05000f8d-0078-48a4-a118-e184c008b5d4_0(767aa79faee5f9c7c8d7a5cdf48c3129523266b8c44c4d8e1ab3ee791db8bdf7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-629tg"
Feb 17 08:51:23 crc kubenswrapper[4813]: E0217 08:51:23.163104 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-629tg_openshift-operators_05000f8d-0078-48a4-a118-e184c008b5d4_0(767aa79faee5f9c7c8d7a5cdf48c3129523266b8c44c4d8e1ab3ee791db8bdf7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-629tg"
Feb 17 08:51:23 crc kubenswrapper[4813]: E0217 08:51:23.163185 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-629tg_openshift-operators(05000f8d-0078-48a4-a118-e184c008b5d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-629tg_openshift-operators(05000f8d-0078-48a4-a118-e184c008b5d4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-629tg_openshift-operators_05000f8d-0078-48a4-a118-e184c008b5d4_0(767aa79faee5f9c7c8d7a5cdf48c3129523266b8c44c4d8e1ab3ee791db8bdf7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-629tg" podUID="05000f8d-0078-48a4-a118-e184c008b5d4"
Feb 17 08:51:23 crc kubenswrapper[4813]: E0217 08:51:23.175131 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5xprj_openshift-operators_c76875de-7ea3-431b-882a-e12415659320_0(a8854451bab24e02a85714504cd727c4008e4657250d9771494c37047b488dab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 08:51:23 crc kubenswrapper[4813]: E0217 08:51:23.175200 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5xprj_openshift-operators_c76875de-7ea3-431b-882a-e12415659320_0(a8854451bab24e02a85714504cd727c4008e4657250d9771494c37047b488dab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-5xprj"
Feb 17 08:51:23 crc kubenswrapper[4813]: E0217 08:51:23.175223 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5xprj_openshift-operators_c76875de-7ea3-431b-882a-e12415659320_0(a8854451bab24e02a85714504cd727c4008e4657250d9771494c37047b488dab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-5xprj"
Feb 17 08:51:23 crc kubenswrapper[4813]: E0217 08:51:23.175272 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-5xprj_openshift-operators(c76875de-7ea3-431b-882a-e12415659320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-5xprj_openshift-operators(c76875de-7ea3-431b-882a-e12415659320)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5xprj_openshift-operators_c76875de-7ea3-431b-882a-e12415659320_0(a8854451bab24e02a85714504cd727c4008e4657250d9771494c37047b488dab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-5xprj" podUID="c76875de-7ea3-431b-882a-e12415659320"
Feb 17 08:51:26 crc kubenswrapper[4813]: I0217 08:51:26.111426 4813 scope.go:117] "RemoveContainer" containerID="ca05934e6e6052c9b3a5fdb83e9bbbf8a47816ebac6cb95bb2f96e065b933856"
Feb 17 08:51:26 crc kubenswrapper[4813]: E0217 08:51:26.111831 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-swpdn_openshift-multus(9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0)\"" pod="openshift-multus/multus-swpdn" podUID="9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0"
Feb 17 08:51:34 crc kubenswrapper[4813]: I0217 08:51:34.110519 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl"
Feb 17 08:51:34 crc kubenswrapper[4813]: I0217 08:51:34.110590 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd"
Feb 17 08:51:34 crc kubenswrapper[4813]: I0217 08:51:34.111921 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd"
Feb 17 08:51:34 crc kubenswrapper[4813]: I0217 08:51:34.111988 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl"
Feb 17 08:51:34 crc kubenswrapper[4813]: E0217 08:51:34.168337 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators_3614da2d-8f3c-41bd-a31c-9d7fdef31fad_0(4330e21b00a43e809a5444093cfc3e9141075ee8f4977ff569729886ecbfa5c8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 08:51:34 crc kubenswrapper[4813]: E0217 08:51:34.168435 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators_3614da2d-8f3c-41bd-a31c-9d7fdef31fad_0(4330e21b00a43e809a5444093cfc3e9141075ee8f4977ff569729886ecbfa5c8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl"
Feb 17 08:51:34 crc kubenswrapper[4813]: E0217 08:51:34.168471 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators_3614da2d-8f3c-41bd-a31c-9d7fdef31fad_0(4330e21b00a43e809a5444093cfc3e9141075ee8f4977ff569729886ecbfa5c8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl"
Feb 17 08:51:34 crc kubenswrapper[4813]: E0217 08:51:34.168540 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators(3614da2d-8f3c-41bd-a31c-9d7fdef31fad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators(3614da2d-8f3c-41bd-a31c-9d7fdef31fad)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-55qdl_openshift-operators_3614da2d-8f3c-41bd-a31c-9d7fdef31fad_0(4330e21b00a43e809a5444093cfc3e9141075ee8f4977ff569729886ecbfa5c8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" podUID="3614da2d-8f3c-41bd-a31c-9d7fdef31fad"
Feb 17 08:51:34 crc kubenswrapper[4813]: E0217 08:51:34.177835 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators_8c42bb3e-30f2-484f-98d6-cc3d6209897a_0(dd76a6632fc02c318aa3b27c0b0bbfb3aa7cef9b24f17c9230cba900d125d183): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 08:51:34 crc kubenswrapper[4813]: E0217 08:51:34.177937 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators_8c42bb3e-30f2-484f-98d6-cc3d6209897a_0(dd76a6632fc02c318aa3b27c0b0bbfb3aa7cef9b24f17c9230cba900d125d183): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd"
Feb 17 08:51:34 crc kubenswrapper[4813]: E0217 08:51:34.177997 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators_8c42bb3e-30f2-484f-98d6-cc3d6209897a_0(dd76a6632fc02c318aa3b27c0b0bbfb3aa7cef9b24f17c9230cba900d125d183): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd"
Feb 17 08:51:34 crc kubenswrapper[4813]: E0217 08:51:34.178098 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators(8c42bb3e-30f2-484f-98d6-cc3d6209897a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators(8c42bb3e-30f2-484f-98d6-cc3d6209897a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-jzdcd_openshift-operators_8c42bb3e-30f2-484f-98d6-cc3d6209897a_0(dd76a6632fc02c318aa3b27c0b0bbfb3aa7cef9b24f17c9230cba900d125d183): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" podUID="8c42bb3e-30f2-484f-98d6-cc3d6209897a"
Feb 17 08:51:36 crc kubenswrapper[4813]: I0217 08:51:36.110490 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-629tg"
Feb 17 08:51:36 crc kubenswrapper[4813]: I0217 08:51:36.111670 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-629tg"
Feb 17 08:51:36 crc kubenswrapper[4813]: E0217 08:51:36.133449 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-629tg_openshift-operators_05000f8d-0078-48a4-a118-e184c008b5d4_0(1b46f11cceb0b2e3cfd31a917997bcd910dfcca69ce24becfe216d77ace8149e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 08:51:36 crc kubenswrapper[4813]: E0217 08:51:36.133580 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-629tg_openshift-operators_05000f8d-0078-48a4-a118-e184c008b5d4_0(1b46f11cceb0b2e3cfd31a917997bcd910dfcca69ce24becfe216d77ace8149e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-629tg"
Feb 17 08:51:36 crc kubenswrapper[4813]: E0217 08:51:36.133653 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-629tg_openshift-operators_05000f8d-0078-48a4-a118-e184c008b5d4_0(1b46f11cceb0b2e3cfd31a917997bcd910dfcca69ce24becfe216d77ace8149e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-629tg"
Feb 17 08:51:36 crc kubenswrapper[4813]: E0217 08:51:36.133753 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-629tg_openshift-operators(05000f8d-0078-48a4-a118-e184c008b5d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-629tg_openshift-operators(05000f8d-0078-48a4-a118-e184c008b5d4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-629tg_openshift-operators_05000f8d-0078-48a4-a118-e184c008b5d4_0(1b46f11cceb0b2e3cfd31a917997bcd910dfcca69ce24becfe216d77ace8149e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-629tg" podUID="05000f8d-0078-48a4-a118-e184c008b5d4"
Feb 17 08:51:37 crc kubenswrapper[4813]: I0217 08:51:37.110675 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv"
Feb 17 08:51:37 crc kubenswrapper[4813]: I0217 08:51:37.111529 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv"
Feb 17 08:51:37 crc kubenswrapper[4813]: E0217 08:51:37.135428 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators_898154a7-5f53-4b78-bd75-4c62b2e6cae1_0(638b3f6ade865b6ad0fd8de91d357a111162e81f46c740e524065c0bba327124): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 08:51:37 crc kubenswrapper[4813]: E0217 08:51:37.135516 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators_898154a7-5f53-4b78-bd75-4c62b2e6cae1_0(638b3f6ade865b6ad0fd8de91d357a111162e81f46c740e524065c0bba327124): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv"
Feb 17 08:51:37 crc kubenswrapper[4813]: E0217 08:51:37.135551 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators_898154a7-5f53-4b78-bd75-4c62b2e6cae1_0(638b3f6ade865b6ad0fd8de91d357a111162e81f46c740e524065c0bba327124): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv"
Feb 17 08:51:37 crc kubenswrapper[4813]: E0217 08:51:37.135649 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators(898154a7-5f53-4b78-bd75-4c62b2e6cae1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators(898154a7-5f53-4b78-bd75-4c62b2e6cae1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5667669b-gvchv_openshift-operators_898154a7-5f53-4b78-bd75-4c62b2e6cae1_0(638b3f6ade865b6ad0fd8de91d357a111162e81f46c740e524065c0bba327124): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" podUID="898154a7-5f53-4b78-bd75-4c62b2e6cae1"
Feb 17 08:51:38 crc kubenswrapper[4813]: I0217 08:51:38.110444 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5xprj"
Feb 17 08:51:38 crc kubenswrapper[4813]: I0217 08:51:38.110869 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5xprj"
Feb 17 08:51:38 crc kubenswrapper[4813]: E0217 08:51:38.130107 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5xprj_openshift-operators_c76875de-7ea3-431b-882a-e12415659320_0(57fc27bc5d696613bd0e1750d250c580ecc1c7d4e6b8c13afde7a3d82d590e42): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 08:51:38 crc kubenswrapper[4813]: E0217 08:51:38.130196 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5xprj_openshift-operators_c76875de-7ea3-431b-882a-e12415659320_0(57fc27bc5d696613bd0e1750d250c580ecc1c7d4e6b8c13afde7a3d82d590e42): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-5xprj"
Feb 17 08:51:38 crc kubenswrapper[4813]: E0217 08:51:38.130224 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5xprj_openshift-operators_c76875de-7ea3-431b-882a-e12415659320_0(57fc27bc5d696613bd0e1750d250c580ecc1c7d4e6b8c13afde7a3d82d590e42): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-5xprj"
Feb 17 08:51:38 crc kubenswrapper[4813]: E0217 08:51:38.130289 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-5xprj_openshift-operators(c76875de-7ea3-431b-882a-e12415659320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-5xprj_openshift-operators(c76875de-7ea3-431b-882a-e12415659320)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-5xprj_openshift-operators_c76875de-7ea3-431b-882a-e12415659320_0(57fc27bc5d696613bd0e1750d250c580ecc1c7d4e6b8c13afde7a3d82d590e42): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-5xprj" podUID="c76875de-7ea3-431b-882a-e12415659320"
Feb 17 08:51:39 crc kubenswrapper[4813]: I0217 08:51:39.112025 4813 scope.go:117] "RemoveContainer" containerID="ca05934e6e6052c9b3a5fdb83e9bbbf8a47816ebac6cb95bb2f96e065b933856"
Feb 17 08:51:40 crc kubenswrapper[4813]: I0217 08:51:40.205265 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-swpdn_9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0/kube-multus/2.log"
Feb 17 08:51:40 crc kubenswrapper[4813]: I0217 08:51:40.205567 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-swpdn" event={"ID":"9ce5e5e2-6f7a-4d35-9e26-c277e6b46cd0","Type":"ContainerStarted","Data":"51159a23b0a5ddb74397d4af7c18c1a21ac72177b23284a63efa74b56999ef6d"}
Feb 17 08:51:45 crc kubenswrapper[4813]: I0217 08:51:45.111343 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd"
Feb 17 08:51:45 crc kubenswrapper[4813]: I0217 08:51:45.112405 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd"
Feb 17 08:51:45 crc kubenswrapper[4813]: I0217 08:51:45.404826 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd"]
Feb 17 08:51:45 crc kubenswrapper[4813]: W0217 08:51:45.412670 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c42bb3e_30f2_484f_98d6_cc3d6209897a.slice/crio-c8d865bf6d107cfebca9a4c9430b6b1ce4cd787d9e320fcc9a7e6ba030945b17 WatchSource:0}: Error finding container c8d865bf6d107cfebca9a4c9430b6b1ce4cd787d9e320fcc9a7e6ba030945b17: Status 404 returned error can't find the container with id c8d865bf6d107cfebca9a4c9430b6b1ce4cd787d9e320fcc9a7e6ba030945b17
Feb 17 08:51:45 crc kubenswrapper[4813]: I0217 08:51:45.542229 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6mph6"
Feb 17 08:51:46 crc kubenswrapper[4813]: I0217 08:51:46.110500 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl"
Feb 17 08:51:46 crc kubenswrapper[4813]: I0217 08:51:46.111197 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl"
Feb 17 08:51:46 crc kubenswrapper[4813]: I0217 08:51:46.245028 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" event={"ID":"8c42bb3e-30f2-484f-98d6-cc3d6209897a","Type":"ContainerStarted","Data":"c8d865bf6d107cfebca9a4c9430b6b1ce4cd787d9e320fcc9a7e6ba030945b17"}
Feb 17 08:51:46 crc kubenswrapper[4813]: I0217 08:51:46.521506 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl"]
Feb 17 08:51:46 crc kubenswrapper[4813]: W0217 08:51:46.527823 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3614da2d_8f3c_41bd_a31c_9d7fdef31fad.slice/crio-ea287bc9a3a3205e9b787ab454baded1bf43cb650cbd7a37f26f987a1f16eedf WatchSource:0}: Error finding container ea287bc9a3a3205e9b787ab454baded1bf43cb650cbd7a37f26f987a1f16eedf: Status 404 returned error can't find the container with id ea287bc9a3a3205e9b787ab454baded1bf43cb650cbd7a37f26f987a1f16eedf
Feb 17 08:51:47 crc kubenswrapper[4813]: I0217 08:51:47.258962 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" event={"ID":"3614da2d-8f3c-41bd-a31c-9d7fdef31fad","Type":"ContainerStarted","Data":"ea287bc9a3a3205e9b787ab454baded1bf43cb650cbd7a37f26f987a1f16eedf"}
Feb 17 08:51:49 crc kubenswrapper[4813]: I0217 08:51:49.111045 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv"
Feb 17 08:51:49 crc kubenswrapper[4813]: I0217 08:51:49.111265 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-629tg"
Feb 17 08:51:49 crc kubenswrapper[4813]: I0217 08:51:49.111808 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv"
Feb 17 08:51:49 crc kubenswrapper[4813]: I0217 08:51:49.112114 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-629tg"
Feb 17 08:51:50 crc kubenswrapper[4813]: I0217 08:51:50.111202 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5xprj"
Feb 17 08:51:50 crc kubenswrapper[4813]: I0217 08:51:50.112107 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5xprj"
Feb 17 08:51:50 crc kubenswrapper[4813]: I0217 08:51:50.846242 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-629tg"]
Feb 17 08:51:50 crc kubenswrapper[4813]: W0217 08:51:50.849292 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05000f8d_0078_48a4_a118_e184c008b5d4.slice/crio-4f7158ec752a5bf73ef44586c69f8e401d6f74ea37ca34265ff474aa35646f10 WatchSource:0}: Error finding container 4f7158ec752a5bf73ef44586c69f8e401d6f74ea37ca34265ff474aa35646f10: Status 404 returned error can't find the container with id 4f7158ec752a5bf73ef44586c69f8e401d6f74ea37ca34265ff474aa35646f10
Feb 17 08:51:51 crc kubenswrapper[4813]: I0217 08:51:51.088058 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5xprj"]
Feb 17 08:51:51 crc kubenswrapper[4813]: I0217 08:51:51.092062 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv"]
Feb 17 08:51:51 crc kubenswrapper[4813]: I0217 08:51:51.637585 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-5xprj" event={"ID":"c76875de-7ea3-431b-882a-e12415659320","Type":"ContainerStarted","Data":"d8a4b458a97c6633a220626834487e219a87178bfeb04cfe0f0041873a819b70"}
Feb 17 08:51:51 crc kubenswrapper[4813]: I0217 08:51:51.641373 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" event={"ID":"898154a7-5f53-4b78-bd75-4c62b2e6cae1","Type":"ContainerStarted","Data":"9e5df436ac1c356ceb20ed2506542336f302435f8ce8ef251efca966c79306ce"}
Feb 17 08:51:51 crc kubenswrapper[4813]: I0217 08:51:51.641447 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" event={"ID":"898154a7-5f53-4b78-bd75-4c62b2e6cae1","Type":"ContainerStarted","Data":"d45de4d5a50ebcc3dc87e38638148779d5b0c642ad12ec335edee67eb7325a66"}
Feb 17 08:51:51 crc kubenswrapper[4813]: I0217 08:51:51.644283 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-629tg" event={"ID":"05000f8d-0078-48a4-a118-e184c008b5d4","Type":"ContainerStarted","Data":"4f7158ec752a5bf73ef44586c69f8e401d6f74ea37ca34265ff474aa35646f10"}
Feb 17 08:51:51 crc kubenswrapper[4813]: I0217 08:51:51.646665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" event={"ID":"8c42bb3e-30f2-484f-98d6-cc3d6209897a","Type":"ContainerStarted","Data":"dda5d5824baaa6199d3227e3c328f641fb964e122ac5f78112d09c2ab31a70af"}
Feb 17 08:51:51 crc kubenswrapper[4813]: I0217 08:51:51.649505 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" event={"ID":"3614da2d-8f3c-41bd-a31c-9d7fdef31fad","Type":"ContainerStarted","Data":"290e13ad3d59d99da834aa31dc02652b0a5a68738402f8b60409bfdb28657043"}
Feb 17 08:51:51 crc kubenswrapper[4813]: I0217 08:51:51.709515 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-55qdl" podStartSLOduration=27.077685951 podStartE2EDuration="30.709491462s" podCreationTimestamp="2026-02-17 08:51:21 +0000 UTC" firstStartedPulling="2026-02-17 08:51:46.531267329 +0000 UTC m=+654.192028552" lastFinishedPulling="2026-02-17 08:51:50.16307279 +0000 UTC m=+657.823834063" observedRunningTime="2026-02-17 08:51:51.706916689 +0000 UTC m=+659.367677952" watchObservedRunningTime="2026-02-17 08:51:51.709491462 +0000 UTC m=+659.370252705"
Feb 17 08:51:51 crc kubenswrapper[4813]: I0217 08:51:51.710496 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5667669b-gvchv" podStartSLOduration=30.71048744 podStartE2EDuration="30.71048744s" podCreationTimestamp="2026-02-17 08:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:51:51.669107675 +0000 UTC m=+659.329868908" watchObservedRunningTime="2026-02-17 08:51:51.71048744 +0000 UTC m=+659.371248693"
Feb 17 08:51:51 crc kubenswrapper[4813]: I0217 08:51:51.739767 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-jzdcd" podStartSLOduration=25.982067897 podStartE2EDuration="30.739746213s" podCreationTimestamp="2026-02-17 08:51:21 +0000 UTC" firstStartedPulling="2026-02-17 08:51:45.414826039 +0000 UTC m=+653.075587262" lastFinishedPulling="2026-02-17 08:51:50.172504315 +0000 UTC m=+657.833265578" observedRunningTime="2026-02-17 08:51:51.735625337 +0000 UTC m=+659.396386560" watchObservedRunningTime="2026-02-17 08:51:51.739746213 +0000 UTC m=+659.400507446"
Feb 17 08:51:53 crc kubenswrapper[4813]: I0217 08:51:53.665577 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-5xprj" event={"ID":"c76875de-7ea3-431b-882a-e12415659320","Type":"ContainerStarted","Data":"cd7d5936957f8e4820ae8eb6f8a8892eef1c724de35c863e73cd86230276d807"}
Feb 17 08:51:53 crc kubenswrapper[4813]: I0217 08:51:53.666785 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-5xprj"
Feb 17 08:51:53 crc kubenswrapper[4813]: I0217 08:51:53.682054 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-5xprj" podStartSLOduration=30.627949586 podStartE2EDuration="32.682040445s" podCreationTimestamp="2026-02-17 08:51:21 +0000 UTC" firstStartedPulling="2026-02-17 08:51:51.095781649 +0000 UTC m=+658.756542872" lastFinishedPulling="2026-02-17 08:51:53.149872508 +0000 UTC m=+660.810633731" observedRunningTime="2026-02-17 08:51:53.68186934 +0000 UTC m=+661.342630563" watchObservedRunningTime="2026-02-17 08:51:53.682040445 +0000 UTC m=+661.342801668"
Feb 17 08:51:56 crc kubenswrapper[4813]: I0217 08:51:56.683727 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-629tg" event={"ID":"05000f8d-0078-48a4-a118-e184c008b5d4","Type":"ContainerStarted","Data":"e1460426484ea8357763898ea69771815e72af187e4fef8ae5c89d089c91a983"}
Feb 17 08:51:56 crc kubenswrapper[4813]: I0217 08:51:56.684429 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-629tg"
Feb 17 08:51:56 crc kubenswrapper[4813]: I0217 08:51:56.779909 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-629tg"
Feb 17 08:51:56 crc kubenswrapper[4813]: I0217 08:51:56.824798 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-629tg" podStartSLOduration=30.934159823999998 podStartE2EDuration="35.824781972s" podCreationTimestamp="2026-02-17 08:51:21 +0000 UTC" firstStartedPulling="2026-02-17 08:51:50.851717211 +0000 UTC m=+658.512478434" lastFinishedPulling="2026-02-17 08:51:55.742339369 +0000 UTC m=+663.403100582" observedRunningTime="2026-02-17 08:51:56.723430189 +0000 UTC m=+664.384191422" watchObservedRunningTime="2026-02-17 08:51:56.824781972 +0000 UTC m=+664.485543195"
Feb 17 08:52:01 crc kubenswrapper[4813]: I0217 08:52:01.810991 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6"]
Feb 17 08:52:01 crc kubenswrapper[4813]: I0217 08:52:01.812455 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6"
Feb 17 08:52:01 crc kubenswrapper[4813]: I0217 08:52:01.816129 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 08:52:01 crc kubenswrapper[4813]: I0217 08:52:01.822848 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6"]
Feb 17 08:52:01 crc kubenswrapper[4813]: I0217 08:52:01.892469 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c87b37ec-5473-4039-8dfa-d695c5f95a3e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6\" (UID: \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6"
Feb 17 08:52:01 crc kubenswrapper[4813]: I0217 08:52:01.892535
4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqmkj\" (UniqueName: \"kubernetes.io/projected/c87b37ec-5473-4039-8dfa-d695c5f95a3e-kube-api-access-jqmkj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6\" (UID: \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" Feb 17 08:52:01 crc kubenswrapper[4813]: I0217 08:52:01.892655 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c87b37ec-5473-4039-8dfa-d695c5f95a3e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6\" (UID: \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" Feb 17 08:52:02 crc kubenswrapper[4813]: I0217 08:52:01.993408 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c87b37ec-5473-4039-8dfa-d695c5f95a3e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6\" (UID: \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" Feb 17 08:52:02 crc kubenswrapper[4813]: I0217 08:52:01.993497 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c87b37ec-5473-4039-8dfa-d695c5f95a3e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6\" (UID: \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" Feb 17 08:52:02 crc kubenswrapper[4813]: I0217 08:52:01.993532 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqmkj\" (UniqueName: 
\"kubernetes.io/projected/c87b37ec-5473-4039-8dfa-d695c5f95a3e-kube-api-access-jqmkj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6\" (UID: \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" Feb 17 08:52:02 crc kubenswrapper[4813]: I0217 08:52:01.993804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c87b37ec-5473-4039-8dfa-d695c5f95a3e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6\" (UID: \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" Feb 17 08:52:02 crc kubenswrapper[4813]: I0217 08:52:01.993907 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c87b37ec-5473-4039-8dfa-d695c5f95a3e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6\" (UID: \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" Feb 17 08:52:02 crc kubenswrapper[4813]: I0217 08:52:02.024152 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqmkj\" (UniqueName: \"kubernetes.io/projected/c87b37ec-5473-4039-8dfa-d695c5f95a3e-kube-api-access-jqmkj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6\" (UID: \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" Feb 17 08:52:02 crc kubenswrapper[4813]: I0217 08:52:02.179802 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" Feb 17 08:52:02 crc kubenswrapper[4813]: I0217 08:52:02.275054 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-5xprj" Feb 17 08:52:02 crc kubenswrapper[4813]: I0217 08:52:02.487188 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6"] Feb 17 08:52:02 crc kubenswrapper[4813]: W0217 08:52:02.495875 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc87b37ec_5473_4039_8dfa_d695c5f95a3e.slice/crio-6f30357e07b8dce71519ce430cda7a1477930b176f935245ad1e504d08e158d5 WatchSource:0}: Error finding container 6f30357e07b8dce71519ce430cda7a1477930b176f935245ad1e504d08e158d5: Status 404 returned error can't find the container with id 6f30357e07b8dce71519ce430cda7a1477930b176f935245ad1e504d08e158d5 Feb 17 08:52:02 crc kubenswrapper[4813]: I0217 08:52:02.731027 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" event={"ID":"c87b37ec-5473-4039-8dfa-d695c5f95a3e","Type":"ContainerStarted","Data":"cf982784c84ec14e7a5f8d70d0bcd47ddd888ec3d7d61a820eade85f3ec52105"} Feb 17 08:52:02 crc kubenswrapper[4813]: I0217 08:52:02.731411 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" event={"ID":"c87b37ec-5473-4039-8dfa-d695c5f95a3e","Type":"ContainerStarted","Data":"6f30357e07b8dce71519ce430cda7a1477930b176f935245ad1e504d08e158d5"} Feb 17 08:52:03 crc kubenswrapper[4813]: I0217 08:52:03.739236 4813 generic.go:334] "Generic (PLEG): container finished" podID="c87b37ec-5473-4039-8dfa-d695c5f95a3e" 
containerID="cf982784c84ec14e7a5f8d70d0bcd47ddd888ec3d7d61a820eade85f3ec52105" exitCode=0 Feb 17 08:52:03 crc kubenswrapper[4813]: I0217 08:52:03.739289 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" event={"ID":"c87b37ec-5473-4039-8dfa-d695c5f95a3e","Type":"ContainerDied","Data":"cf982784c84ec14e7a5f8d70d0bcd47ddd888ec3d7d61a820eade85f3ec52105"} Feb 17 08:52:05 crc kubenswrapper[4813]: I0217 08:52:05.751603 4813 generic.go:334] "Generic (PLEG): container finished" podID="c87b37ec-5473-4039-8dfa-d695c5f95a3e" containerID="7e3544343f4b43ddd43e38720d0b862943d21094fa188aa83952615e0ab6f8a2" exitCode=0 Feb 17 08:52:05 crc kubenswrapper[4813]: I0217 08:52:05.751650 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" event={"ID":"c87b37ec-5473-4039-8dfa-d695c5f95a3e","Type":"ContainerDied","Data":"7e3544343f4b43ddd43e38720d0b862943d21094fa188aa83952615e0ab6f8a2"} Feb 17 08:52:06 crc kubenswrapper[4813]: I0217 08:52:06.771412 4813 generic.go:334] "Generic (PLEG): container finished" podID="c87b37ec-5473-4039-8dfa-d695c5f95a3e" containerID="dc7e199df1c848430e6a549d8f23d2dcbef4c0c4729e40d825e809e47034a725" exitCode=0 Feb 17 08:52:06 crc kubenswrapper[4813]: I0217 08:52:06.771633 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" event={"ID":"c87b37ec-5473-4039-8dfa-d695c5f95a3e","Type":"ContainerDied","Data":"dc7e199df1c848430e6a549d8f23d2dcbef4c0c4729e40d825e809e47034a725"} Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.032666 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.166020 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c87b37ec-5473-4039-8dfa-d695c5f95a3e-util\") pod \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\" (UID: \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\") " Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.166111 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqmkj\" (UniqueName: \"kubernetes.io/projected/c87b37ec-5473-4039-8dfa-d695c5f95a3e-kube-api-access-jqmkj\") pod \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\" (UID: \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\") " Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.166167 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c87b37ec-5473-4039-8dfa-d695c5f95a3e-bundle\") pod \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\" (UID: \"c87b37ec-5473-4039-8dfa-d695c5f95a3e\") " Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.166884 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c87b37ec-5473-4039-8dfa-d695c5f95a3e-bundle" (OuterVolumeSpecName: "bundle") pod "c87b37ec-5473-4039-8dfa-d695c5f95a3e" (UID: "c87b37ec-5473-4039-8dfa-d695c5f95a3e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.167169 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c87b37ec-5473-4039-8dfa-d695c5f95a3e-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.174621 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c87b37ec-5473-4039-8dfa-d695c5f95a3e-kube-api-access-jqmkj" (OuterVolumeSpecName: "kube-api-access-jqmkj") pod "c87b37ec-5473-4039-8dfa-d695c5f95a3e" (UID: "c87b37ec-5473-4039-8dfa-d695c5f95a3e"). InnerVolumeSpecName "kube-api-access-jqmkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.256226 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c87b37ec-5473-4039-8dfa-d695c5f95a3e-util" (OuterVolumeSpecName: "util") pod "c87b37ec-5473-4039-8dfa-d695c5f95a3e" (UID: "c87b37ec-5473-4039-8dfa-d695c5f95a3e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.268767 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c87b37ec-5473-4039-8dfa-d695c5f95a3e-util\") on node \"crc\" DevicePath \"\"" Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.268806 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqmkj\" (UniqueName: \"kubernetes.io/projected/c87b37ec-5473-4039-8dfa-d695c5f95a3e-kube-api-access-jqmkj\") on node \"crc\" DevicePath \"\"" Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.788424 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" event={"ID":"c87b37ec-5473-4039-8dfa-d695c5f95a3e","Type":"ContainerDied","Data":"6f30357e07b8dce71519ce430cda7a1477930b176f935245ad1e504d08e158d5"} Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.788469 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f30357e07b8dce71519ce430cda7a1477930b176f935245ad1e504d08e158d5" Feb 17 08:52:08 crc kubenswrapper[4813]: I0217 08:52:08.789000 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.593267 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-dr2hx"] Feb 17 08:52:13 crc kubenswrapper[4813]: E0217 08:52:13.593848 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87b37ec-5473-4039-8dfa-d695c5f95a3e" containerName="pull" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.593864 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87b37ec-5473-4039-8dfa-d695c5f95a3e" containerName="pull" Feb 17 08:52:13 crc kubenswrapper[4813]: E0217 08:52:13.593887 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87b37ec-5473-4039-8dfa-d695c5f95a3e" containerName="extract" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.593894 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87b37ec-5473-4039-8dfa-d695c5f95a3e" containerName="extract" Feb 17 08:52:13 crc kubenswrapper[4813]: E0217 08:52:13.593909 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c87b37ec-5473-4039-8dfa-d695c5f95a3e" containerName="util" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.593917 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c87b37ec-5473-4039-8dfa-d695c5f95a3e" containerName="util" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.594049 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c87b37ec-5473-4039-8dfa-d695c5f95a3e" containerName="extract" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.594515 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-dr2hx" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.596519 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.597074 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ppm6m" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.598160 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.614924 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-dr2hx"] Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.737244 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b4qw\" (UniqueName: \"kubernetes.io/projected/613d6923-adfe-48e3-a162-941d317ec5fc-kube-api-access-5b4qw\") pod \"nmstate-operator-694c9596b7-dr2hx\" (UID: \"613d6923-adfe-48e3-a162-941d317ec5fc\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-dr2hx" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.838040 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b4qw\" (UniqueName: \"kubernetes.io/projected/613d6923-adfe-48e3-a162-941d317ec5fc-kube-api-access-5b4qw\") pod \"nmstate-operator-694c9596b7-dr2hx\" (UID: \"613d6923-adfe-48e3-a162-941d317ec5fc\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-dr2hx" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.855980 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b4qw\" (UniqueName: \"kubernetes.io/projected/613d6923-adfe-48e3-a162-941d317ec5fc-kube-api-access-5b4qw\") pod \"nmstate-operator-694c9596b7-dr2hx\" (UID: 
\"613d6923-adfe-48e3-a162-941d317ec5fc\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-dr2hx" Feb 17 08:52:13 crc kubenswrapper[4813]: I0217 08:52:13.908663 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-dr2hx" Feb 17 08:52:14 crc kubenswrapper[4813]: I0217 08:52:14.386626 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-dr2hx"] Feb 17 08:52:14 crc kubenswrapper[4813]: I0217 08:52:14.827735 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-dr2hx" event={"ID":"613d6923-adfe-48e3-a162-941d317ec5fc","Type":"ContainerStarted","Data":"21a81c4ac801085a6e85381d4ce4cfc87c50ec89eea993e73b0a46d6fbc65b3a"} Feb 17 08:52:18 crc kubenswrapper[4813]: I0217 08:52:18.852213 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-dr2hx" event={"ID":"613d6923-adfe-48e3-a162-941d317ec5fc","Type":"ContainerStarted","Data":"04ecdafbc6a04b216382a9417c1d01319067f7807668e01d206a852af6e36d39"} Feb 17 08:52:18 crc kubenswrapper[4813]: I0217 08:52:18.874484 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-dr2hx" podStartSLOduration=2.515461487 podStartE2EDuration="5.87446551s" podCreationTimestamp="2026-02-17 08:52:13 +0000 UTC" firstStartedPulling="2026-02-17 08:52:14.394622293 +0000 UTC m=+682.055383526" lastFinishedPulling="2026-02-17 08:52:17.753626336 +0000 UTC m=+685.414387549" observedRunningTime="2026-02-17 08:52:18.871418195 +0000 UTC m=+686.532179418" watchObservedRunningTime="2026-02-17 08:52:18.87446551 +0000 UTC m=+686.535226733" Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.890378 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-f5sws"] Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 
08:52:22.891798 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5sws" Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.894232 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-cmkz2" Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.903253 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp"] Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.903864 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.907772 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-f5sws"] Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.907895 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.914235 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-k95j2"] Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.914874 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.922179 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp"] Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.974784 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e8f29774-0876-4d40-aa50-3ba424ae667c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-pg2bp\" (UID: \"e8f29774-0876-4d40-aa50-3ba424ae667c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.974834 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdql\" (UniqueName: \"kubernetes.io/projected/e8f29774-0876-4d40-aa50-3ba424ae667c-kube-api-access-stdql\") pod \"nmstate-webhook-866bcb46dc-pg2bp\" (UID: \"e8f29774-0876-4d40-aa50-3ba424ae667c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.974870 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd-nmstate-lock\") pod \"nmstate-handler-k95j2\" (UID: \"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd\") " pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.974903 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl792\" (UniqueName: \"kubernetes.io/projected/c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd-kube-api-access-tl792\") pod \"nmstate-handler-k95j2\" (UID: \"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd\") " pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.974933 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd-dbus-socket\") pod \"nmstate-handler-k95j2\" (UID: \"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd\") " pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.974970 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd-ovs-socket\") pod \"nmstate-handler-k95j2\" (UID: \"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd\") " pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:22 crc kubenswrapper[4813]: I0217 08:52:22.974985 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfw54\" (UniqueName: \"kubernetes.io/projected/68527739-5299-42bc-9b81-16ed9c46f0d0-kube-api-access-sfw54\") pod \"nmstate-metrics-58c85c668d-f5sws\" (UID: \"68527739-5299-42bc-9b81-16ed9c46f0d0\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5sws" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.032401 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg"] Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.033056 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.035026 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.036749 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hdsbj" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.036808 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.039721 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg"] Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.076663 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl792\" (UniqueName: \"kubernetes.io/projected/c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd-kube-api-access-tl792\") pod \"nmstate-handler-k95j2\" (UID: \"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd\") " pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.076714 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/162fb627-930f-44ba-891b-34d91fda1558-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-j9ltg\" (UID: \"162fb627-930f-44ba-891b-34d91fda1558\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.076742 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/162fb627-930f-44ba-891b-34d91fda1558-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-j9ltg\" (UID: \"162fb627-930f-44ba-891b-34d91fda1558\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.076771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd-dbus-socket\") pod \"nmstate-handler-k95j2\" (UID: \"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd\") " pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.076810 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd-ovs-socket\") pod \"nmstate-handler-k95j2\" (UID: \"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd\") " pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.076829 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfw54\" (UniqueName: \"kubernetes.io/projected/68527739-5299-42bc-9b81-16ed9c46f0d0-kube-api-access-sfw54\") pod \"nmstate-metrics-58c85c668d-f5sws\" (UID: \"68527739-5299-42bc-9b81-16ed9c46f0d0\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5sws" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.076847 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e8f29774-0876-4d40-aa50-3ba424ae667c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-pg2bp\" (UID: \"e8f29774-0876-4d40-aa50-3ba424ae667c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.076864 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stdql\" (UniqueName: \"kubernetes.io/projected/e8f29774-0876-4d40-aa50-3ba424ae667c-kube-api-access-stdql\") pod \"nmstate-webhook-866bcb46dc-pg2bp\" (UID: \"e8f29774-0876-4d40-aa50-3ba424ae667c\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.076897 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd-nmstate-lock\") pod \"nmstate-handler-k95j2\" (UID: \"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd\") " pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.076897 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd-ovs-socket\") pod \"nmstate-handler-k95j2\" (UID: \"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd\") " pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.076915 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc22m\" (UniqueName: \"kubernetes.io/projected/162fb627-930f-44ba-891b-34d91fda1558-kube-api-access-zc22m\") pod \"nmstate-console-plugin-5c78fc5d65-j9ltg\" (UID: \"162fb627-930f-44ba-891b-34d91fda1558\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.077028 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd-nmstate-lock\") pod \"nmstate-handler-k95j2\" (UID: \"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd\") " pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:23 crc kubenswrapper[4813]: E0217 08:52:23.077089 4813 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 17 08:52:23 crc kubenswrapper[4813]: E0217 08:52:23.077143 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e8f29774-0876-4d40-aa50-3ba424ae667c-tls-key-pair podName:e8f29774-0876-4d40-aa50-3ba424ae667c nodeName:}" failed. No retries permitted until 2026-02-17 08:52:23.577126377 +0000 UTC m=+691.237887600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/e8f29774-0876-4d40-aa50-3ba424ae667c-tls-key-pair") pod "nmstate-webhook-866bcb46dc-pg2bp" (UID: "e8f29774-0876-4d40-aa50-3ba424ae667c") : secret "openshift-nmstate-webhook" not found Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.077171 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd-dbus-socket\") pod \"nmstate-handler-k95j2\" (UID: \"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd\") " pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.093318 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stdql\" (UniqueName: \"kubernetes.io/projected/e8f29774-0876-4d40-aa50-3ba424ae667c-kube-api-access-stdql\") pod \"nmstate-webhook-866bcb46dc-pg2bp\" (UID: \"e8f29774-0876-4d40-aa50-3ba424ae667c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.097211 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfw54\" (UniqueName: \"kubernetes.io/projected/68527739-5299-42bc-9b81-16ed9c46f0d0-kube-api-access-sfw54\") pod \"nmstate-metrics-58c85c668d-f5sws\" (UID: \"68527739-5299-42bc-9b81-16ed9c46f0d0\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5sws" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.103226 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl792\" (UniqueName: \"kubernetes.io/projected/c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd-kube-api-access-tl792\") pod 
\"nmstate-handler-k95j2\" (UID: \"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd\") " pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.178428 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc22m\" (UniqueName: \"kubernetes.io/projected/162fb627-930f-44ba-891b-34d91fda1558-kube-api-access-zc22m\") pod \"nmstate-console-plugin-5c78fc5d65-j9ltg\" (UID: \"162fb627-930f-44ba-891b-34d91fda1558\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.178512 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/162fb627-930f-44ba-891b-34d91fda1558-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-j9ltg\" (UID: \"162fb627-930f-44ba-891b-34d91fda1558\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.178541 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/162fb627-930f-44ba-891b-34d91fda1558-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-j9ltg\" (UID: \"162fb627-930f-44ba-891b-34d91fda1558\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" Feb 17 08:52:23 crc kubenswrapper[4813]: E0217 08:52:23.179562 4813 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 17 08:52:23 crc kubenswrapper[4813]: E0217 08:52:23.179617 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/162fb627-930f-44ba-891b-34d91fda1558-plugin-serving-cert podName:162fb627-930f-44ba-891b-34d91fda1558 nodeName:}" failed. No retries permitted until 2026-02-17 08:52:23.679599981 +0000 UTC m=+691.340361214 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/162fb627-930f-44ba-891b-34d91fda1558-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-j9ltg" (UID: "162fb627-930f-44ba-891b-34d91fda1558") : secret "plugin-serving-cert" not found Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.179676 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/162fb627-930f-44ba-891b-34d91fda1558-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-j9ltg\" (UID: \"162fb627-930f-44ba-891b-34d91fda1558\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.197807 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc22m\" (UniqueName: \"kubernetes.io/projected/162fb627-930f-44ba-891b-34d91fda1558-kube-api-access-zc22m\") pod \"nmstate-console-plugin-5c78fc5d65-j9ltg\" (UID: \"162fb627-930f-44ba-891b-34d91fda1558\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.215775 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-778567d7df-csf7r"] Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.216872 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.235222 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-778567d7df-csf7r"] Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.254760 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5sws" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.275999 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.280714 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/711327fc-bdf0-4251-a90d-968ec048caa5-console-oauth-config\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.280753 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-trusted-ca-bundle\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.280779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/711327fc-bdf0-4251-a90d-968ec048caa5-console-serving-cert\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.280796 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr4j7\" (UniqueName: \"kubernetes.io/projected/711327fc-bdf0-4251-a90d-968ec048caa5-kube-api-access-rr4j7\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.280832 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-console-config\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.280855 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-oauth-serving-cert\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.280904 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-service-ca\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.384402 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/711327fc-bdf0-4251-a90d-968ec048caa5-console-serving-cert\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.384726 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr4j7\" (UniqueName: \"kubernetes.io/projected/711327fc-bdf0-4251-a90d-968ec048caa5-kube-api-access-rr4j7\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.384784 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-console-config\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.384808 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-oauth-serving-cert\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.384865 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-service-ca\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.384888 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/711327fc-bdf0-4251-a90d-968ec048caa5-console-oauth-config\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.384907 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-trusted-ca-bundle\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.385924 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-trusted-ca-bundle\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.386525 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-service-ca\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.386947 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-oauth-serving-cert\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.387545 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-console-config\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.388707 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/711327fc-bdf0-4251-a90d-968ec048caa5-console-serving-cert\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.389444 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/711327fc-bdf0-4251-a90d-968ec048caa5-console-oauth-config\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.405750 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr4j7\" (UniqueName: \"kubernetes.io/projected/711327fc-bdf0-4251-a90d-968ec048caa5-kube-api-access-rr4j7\") pod \"console-778567d7df-csf7r\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.431257 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-f5sws"] Feb 17 08:52:23 crc kubenswrapper[4813]: W0217 08:52:23.441059 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68527739_5299_42bc_9b81_16ed9c46f0d0.slice/crio-5cc5796b937b575d1c5f1ab4091b71af0ff0a10ad0aef4e6c4cfe9e1382d06a2 WatchSource:0}: Error finding container 5cc5796b937b575d1c5f1ab4091b71af0ff0a10ad0aef4e6c4cfe9e1382d06a2: Status 404 returned error can't find the container with id 5cc5796b937b575d1c5f1ab4091b71af0ff0a10ad0aef4e6c4cfe9e1382d06a2 Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.530541 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.599589 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e8f29774-0876-4d40-aa50-3ba424ae667c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-pg2bp\" (UID: \"e8f29774-0876-4d40-aa50-3ba424ae667c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.607829 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e8f29774-0876-4d40-aa50-3ba424ae667c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-pg2bp\" (UID: \"e8f29774-0876-4d40-aa50-3ba424ae667c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.701114 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/162fb627-930f-44ba-891b-34d91fda1558-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-j9ltg\" (UID: \"162fb627-930f-44ba-891b-34d91fda1558\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.706250 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/162fb627-930f-44ba-891b-34d91fda1558-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-j9ltg\" (UID: \"162fb627-930f-44ba-891b-34d91fda1558\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.744491 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-778567d7df-csf7r"] Feb 17 08:52:23 crc kubenswrapper[4813]: W0217 08:52:23.751349 4813 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod711327fc_bdf0_4251_a90d_968ec048caa5.slice/crio-24bfe783bfb7fbcda8909429d0ef36a425e5422091955b070af7dc4211a01ab0 WatchSource:0}: Error finding container 24bfe783bfb7fbcda8909429d0ef36a425e5422091955b070af7dc4211a01ab0: Status 404 returned error can't find the container with id 24bfe783bfb7fbcda8909429d0ef36a425e5422091955b070af7dc4211a01ab0 Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.865748 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.886550 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-k95j2" event={"ID":"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd","Type":"ContainerStarted","Data":"51f468c52a7af1adb7b0606867729a605196bbb77c3f3597f29cc861b87369c1"} Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.887778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5sws" event={"ID":"68527739-5299-42bc-9b81-16ed9c46f0d0","Type":"ContainerStarted","Data":"5cc5796b937b575d1c5f1ab4091b71af0ff0a10ad0aef4e6c4cfe9e1382d06a2"} Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.888747 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-778567d7df-csf7r" event={"ID":"711327fc-bdf0-4251-a90d-968ec048caa5","Type":"ContainerStarted","Data":"24bfe783bfb7fbcda8909429d0ef36a425e5422091955b070af7dc4211a01ab0"} Feb 17 08:52:23 crc kubenswrapper[4813]: I0217 08:52:23.945990 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" Feb 17 08:52:24 crc kubenswrapper[4813]: I0217 08:52:24.162429 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp"] Feb 17 08:52:24 crc kubenswrapper[4813]: W0217 08:52:24.174480 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8f29774_0876_4d40_aa50_3ba424ae667c.slice/crio-016029f0491d66f8a8c10b08620c90355d0d8b031f510c6a8374144bb82e2d32 WatchSource:0}: Error finding container 016029f0491d66f8a8c10b08620c90355d0d8b031f510c6a8374144bb82e2d32: Status 404 returned error can't find the container with id 016029f0491d66f8a8c10b08620c90355d0d8b031f510c6a8374144bb82e2d32 Feb 17 08:52:24 crc kubenswrapper[4813]: I0217 08:52:24.260356 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg"] Feb 17 08:52:24 crc kubenswrapper[4813]: I0217 08:52:24.895086 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" event={"ID":"e8f29774-0876-4d40-aa50-3ba424ae667c","Type":"ContainerStarted","Data":"016029f0491d66f8a8c10b08620c90355d0d8b031f510c6a8374144bb82e2d32"} Feb 17 08:52:24 crc kubenswrapper[4813]: I0217 08:52:24.896787 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" event={"ID":"162fb627-930f-44ba-891b-34d91fda1558","Type":"ContainerStarted","Data":"6abc8f6018c668500d9e74255d5ccb5c6777ef653a464de1aaade0848ed330b8"} Feb 17 08:52:24 crc kubenswrapper[4813]: I0217 08:52:24.898612 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-778567d7df-csf7r" event={"ID":"711327fc-bdf0-4251-a90d-968ec048caa5","Type":"ContainerStarted","Data":"a092ff3bb949aec18104bcad58bd73bc06e5e6a0c4cd915b5399af53f74151f5"} Feb 17 08:52:24 crc 
kubenswrapper[4813]: I0217 08:52:24.920766 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-778567d7df-csf7r" podStartSLOduration=1.920749222 podStartE2EDuration="1.920749222s" podCreationTimestamp="2026-02-17 08:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:52:24.916469581 +0000 UTC m=+692.577230804" watchObservedRunningTime="2026-02-17 08:52:24.920749222 +0000 UTC m=+692.581510445" Feb 17 08:52:26 crc kubenswrapper[4813]: I0217 08:52:26.921823 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" event={"ID":"e8f29774-0876-4d40-aa50-3ba424ae667c","Type":"ContainerStarted","Data":"49442adbab856dc092e903ad566daf10fbc7868206e23e116218eebbb961f1bb"} Feb 17 08:52:26 crc kubenswrapper[4813]: I0217 08:52:26.922342 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" Feb 17 08:52:26 crc kubenswrapper[4813]: I0217 08:52:26.923674 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-k95j2" event={"ID":"c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd","Type":"ContainerStarted","Data":"a632af4ddc50752663b7ebef0303833ccd2ca511b7849f6958f3492cc0074641"} Feb 17 08:52:26 crc kubenswrapper[4813]: I0217 08:52:26.924398 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:26 crc kubenswrapper[4813]: I0217 08:52:26.933554 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5sws" event={"ID":"68527739-5299-42bc-9b81-16ed9c46f0d0","Type":"ContainerStarted","Data":"7d65c29207e2f4421f207744fac6666f90033592310ae3831d1c7fa374fb6706"} Feb 17 08:52:26 crc kubenswrapper[4813]: I0217 08:52:26.948884 4813 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" podStartSLOduration=2.979908787 podStartE2EDuration="4.94886982s" podCreationTimestamp="2026-02-17 08:52:22 +0000 UTC" firstStartedPulling="2026-02-17 08:52:24.176944639 +0000 UTC m=+691.837705882" lastFinishedPulling="2026-02-17 08:52:26.145905652 +0000 UTC m=+693.806666915" observedRunningTime="2026-02-17 08:52:26.9481585 +0000 UTC m=+694.608919723" watchObservedRunningTime="2026-02-17 08:52:26.94886982 +0000 UTC m=+694.609631043" Feb 17 08:52:26 crc kubenswrapper[4813]: I0217 08:52:26.977490 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-k95j2" podStartSLOduration=1.988964829 podStartE2EDuration="4.977472055s" podCreationTimestamp="2026-02-17 08:52:22 +0000 UTC" firstStartedPulling="2026-02-17 08:52:23.305331529 +0000 UTC m=+690.966092752" lastFinishedPulling="2026-02-17 08:52:26.293838745 +0000 UTC m=+693.954599978" observedRunningTime="2026-02-17 08:52:26.971001903 +0000 UTC m=+694.631763126" watchObservedRunningTime="2026-02-17 08:52:26.977472055 +0000 UTC m=+694.638233278" Feb 17 08:52:27 crc kubenswrapper[4813]: I0217 08:52:27.940365 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" event={"ID":"162fb627-930f-44ba-891b-34d91fda1558","Type":"ContainerStarted","Data":"164074c423a0346b357296c51bc2834420afd87cc66d8e9485c897c63cbee7dc"} Feb 17 08:52:27 crc kubenswrapper[4813]: I0217 08:52:27.967465 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-j9ltg" podStartSLOduration=1.6009315320000002 podStartE2EDuration="4.967446316s" podCreationTimestamp="2026-02-17 08:52:23 +0000 UTC" firstStartedPulling="2026-02-17 08:52:24.277467519 +0000 UTC m=+691.938228732" lastFinishedPulling="2026-02-17 08:52:27.643982283 +0000 UTC m=+695.304743516" observedRunningTime="2026-02-17 
08:52:27.960097839 +0000 UTC m=+695.620859062" watchObservedRunningTime="2026-02-17 08:52:27.967446316 +0000 UTC m=+695.628207539" Feb 17 08:52:29 crc kubenswrapper[4813]: I0217 08:52:29.958865 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5sws" event={"ID":"68527739-5299-42bc-9b81-16ed9c46f0d0","Type":"ContainerStarted","Data":"e832bf20a788d8f9adb0dc18fada57d7a804b90a3e52c40323024fde7d263639"} Feb 17 08:52:29 crc kubenswrapper[4813]: I0217 08:52:29.986913 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-f5sws" podStartSLOduration=2.6011889679999998 podStartE2EDuration="7.986889099s" podCreationTimestamp="2026-02-17 08:52:22 +0000 UTC" firstStartedPulling="2026-02-17 08:52:23.44501642 +0000 UTC m=+691.105777663" lastFinishedPulling="2026-02-17 08:52:28.830716561 +0000 UTC m=+696.491477794" observedRunningTime="2026-02-17 08:52:29.983783492 +0000 UTC m=+697.644544775" watchObservedRunningTime="2026-02-17 08:52:29.986889099 +0000 UTC m=+697.647650352" Feb 17 08:52:33 crc kubenswrapper[4813]: I0217 08:52:33.316951 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-k95j2" Feb 17 08:52:33 crc kubenswrapper[4813]: I0217 08:52:33.531423 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:33 crc kubenswrapper[4813]: I0217 08:52:33.531786 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:33 crc kubenswrapper[4813]: I0217 08:52:33.538654 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:34 crc kubenswrapper[4813]: I0217 08:52:34.001698 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:52:34 crc kubenswrapper[4813]: I0217 08:52:34.073853 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-l2l9m"] Feb 17 08:52:35 crc kubenswrapper[4813]: I0217 08:52:35.165289 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:52:35 crc kubenswrapper[4813]: I0217 08:52:35.165416 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:52:43 crc kubenswrapper[4813]: I0217 08:52:43.872830 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-pg2bp" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.185620 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-l2l9m" podUID="70da8a3c-ff49-4f82-a68b-d955c2cceb2b" containerName="console" containerID="cri-o://0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47" gracePeriod=15 Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.631792 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-l2l9m_70da8a3c-ff49-4f82-a68b-d955c2cceb2b/console/0.log" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.632121 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.682164 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-oauth-config\") pod \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.682210 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-trusted-ca-bundle\") pod \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.682255 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-service-ca\") pod \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.682272 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-oauth-serving-cert\") pod \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.682324 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp75m\" (UniqueName: \"kubernetes.io/projected/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-kube-api-access-rp75m\") pod \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.682370 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-config\") pod \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.682401 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-serving-cert\") pod \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\" (UID: \"70da8a3c-ff49-4f82-a68b-d955c2cceb2b\") " Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.683188 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "70da8a3c-ff49-4f82-a68b-d955c2cceb2b" (UID: "70da8a3c-ff49-4f82-a68b-d955c2cceb2b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.683273 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-service-ca" (OuterVolumeSpecName: "service-ca") pod "70da8a3c-ff49-4f82-a68b-d955c2cceb2b" (UID: "70da8a3c-ff49-4f82-a68b-d955c2cceb2b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.683379 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "70da8a3c-ff49-4f82-a68b-d955c2cceb2b" (UID: "70da8a3c-ff49-4f82-a68b-d955c2cceb2b"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.683816 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.683848 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.683860 4813 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.684396 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-config" (OuterVolumeSpecName: "console-config") pod "70da8a3c-ff49-4f82-a68b-d955c2cceb2b" (UID: "70da8a3c-ff49-4f82-a68b-d955c2cceb2b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.696670 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "70da8a3c-ff49-4f82-a68b-d955c2cceb2b" (UID: "70da8a3c-ff49-4f82-a68b-d955c2cceb2b"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.696704 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-kube-api-access-rp75m" (OuterVolumeSpecName: "kube-api-access-rp75m") pod "70da8a3c-ff49-4f82-a68b-d955c2cceb2b" (UID: "70da8a3c-ff49-4f82-a68b-d955c2cceb2b"). InnerVolumeSpecName "kube-api-access-rp75m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.697569 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "70da8a3c-ff49-4f82-a68b-d955c2cceb2b" (UID: "70da8a3c-ff49-4f82-a68b-d955c2cceb2b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.785240 4813 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.785289 4813 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.785327 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp75m\" (UniqueName: \"kubernetes.io/projected/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-kube-api-access-rp75m\") on node \"crc\" DevicePath \"\"" Feb 17 08:52:59 crc kubenswrapper[4813]: I0217 08:52:59.785348 4813 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/70da8a3c-ff49-4f82-a68b-d955c2cceb2b-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.160199 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr"] Feb 17 08:53:00 crc kubenswrapper[4813]: E0217 08:53:00.160549 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70da8a3c-ff49-4f82-a68b-d955c2cceb2b" containerName="console" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.160574 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="70da8a3c-ff49-4f82-a68b-d955c2cceb2b" containerName="console" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.160724 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="70da8a3c-ff49-4f82-a68b-d955c2cceb2b" containerName="console" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.161733 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.172626 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.176688 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr"] Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.207948 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-l2l9m_70da8a3c-ff49-4f82-a68b-d955c2cceb2b/console/0.log" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.208012 4813 generic.go:334] "Generic (PLEG): container finished" podID="70da8a3c-ff49-4f82-a68b-d955c2cceb2b" containerID="0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47" exitCode=2 Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.208044 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l2l9m" event={"ID":"70da8a3c-ff49-4f82-a68b-d955c2cceb2b","Type":"ContainerDied","Data":"0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47"} Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.208105 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l2l9m" event={"ID":"70da8a3c-ff49-4f82-a68b-d955c2cceb2b","Type":"ContainerDied","Data":"9aa14230565d0c1d1ef730a56409858648d46760d5859e78ba6fb6f7f8caa9d9"} Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.208109 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-l2l9m" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.208127 4813 scope.go:117] "RemoveContainer" containerID="0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.233200 4813 scope.go:117] "RemoveContainer" containerID="0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47" Feb 17 08:53:00 crc kubenswrapper[4813]: E0217 08:53:00.233721 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47\": container with ID starting with 0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47 not found: ID does not exist" containerID="0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.233758 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47"} err="failed to get container status \"0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47\": rpc error: code = NotFound desc = could not find container \"0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47\": container with ID starting with 0ea50fdae480e0e3268b26ca90eb0863ada075b85118159e5b67ebbfb2040d47 not found: ID does not exist" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.267663 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-l2l9m"] Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.272812 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-l2l9m"] Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.291556 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/45316cad-79c7-4ecb-801e-25dbf1c5d213-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr\" (UID: \"45316cad-79c7-4ecb-801e-25dbf1c5d213\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.291794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmmk4\" (UniqueName: \"kubernetes.io/projected/45316cad-79c7-4ecb-801e-25dbf1c5d213-kube-api-access-lmmk4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr\" (UID: \"45316cad-79c7-4ecb-801e-25dbf1c5d213\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.292005 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45316cad-79c7-4ecb-801e-25dbf1c5d213-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr\" (UID: \"45316cad-79c7-4ecb-801e-25dbf1c5d213\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.393186 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45316cad-79c7-4ecb-801e-25dbf1c5d213-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr\" (UID: \"45316cad-79c7-4ecb-801e-25dbf1c5d213\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.393288 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmmk4\" (UniqueName: \"kubernetes.io/projected/45316cad-79c7-4ecb-801e-25dbf1c5d213-kube-api-access-lmmk4\") 
pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr\" (UID: \"45316cad-79c7-4ecb-801e-25dbf1c5d213\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.393388 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45316cad-79c7-4ecb-801e-25dbf1c5d213-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr\" (UID: \"45316cad-79c7-4ecb-801e-25dbf1c5d213\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.394501 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45316cad-79c7-4ecb-801e-25dbf1c5d213-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr\" (UID: \"45316cad-79c7-4ecb-801e-25dbf1c5d213\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.394716 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45316cad-79c7-4ecb-801e-25dbf1c5d213-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr\" (UID: \"45316cad-79c7-4ecb-801e-25dbf1c5d213\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.425636 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmmk4\" (UniqueName: \"kubernetes.io/projected/45316cad-79c7-4ecb-801e-25dbf1c5d213-kube-api-access-lmmk4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr\" (UID: \"45316cad-79c7-4ecb-801e-25dbf1c5d213\") " 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.487175 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:00 crc kubenswrapper[4813]: I0217 08:53:00.793628 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr"] Feb 17 08:53:01 crc kubenswrapper[4813]: I0217 08:53:01.120670 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70da8a3c-ff49-4f82-a68b-d955c2cceb2b" path="/var/lib/kubelet/pods/70da8a3c-ff49-4f82-a68b-d955c2cceb2b/volumes" Feb 17 08:53:01 crc kubenswrapper[4813]: I0217 08:53:01.218677 4813 generic.go:334] "Generic (PLEG): container finished" podID="45316cad-79c7-4ecb-801e-25dbf1c5d213" containerID="32383341adc35775bfe1935d5bee4c2c56ca7044f4d6618657ed2e240098ab75" exitCode=0 Feb 17 08:53:01 crc kubenswrapper[4813]: I0217 08:53:01.218736 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" event={"ID":"45316cad-79c7-4ecb-801e-25dbf1c5d213","Type":"ContainerDied","Data":"32383341adc35775bfe1935d5bee4c2c56ca7044f4d6618657ed2e240098ab75"} Feb 17 08:53:01 crc kubenswrapper[4813]: I0217 08:53:01.219098 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" event={"ID":"45316cad-79c7-4ecb-801e-25dbf1c5d213","Type":"ContainerStarted","Data":"597ef080baf8a62a22d8c144a000b09ed34708a268c84513026005a7220253e7"} Feb 17 08:53:03 crc kubenswrapper[4813]: I0217 08:53:03.239923 4813 generic.go:334] "Generic (PLEG): container finished" podID="45316cad-79c7-4ecb-801e-25dbf1c5d213" 
containerID="c63b203a45adfdf36ec2ba785a772b4b4f90f0c217ae19da5d71c2acaa09cb47" exitCode=0 Feb 17 08:53:03 crc kubenswrapper[4813]: I0217 08:53:03.240018 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" event={"ID":"45316cad-79c7-4ecb-801e-25dbf1c5d213","Type":"ContainerDied","Data":"c63b203a45adfdf36ec2ba785a772b4b4f90f0c217ae19da5d71c2acaa09cb47"} Feb 17 08:53:04 crc kubenswrapper[4813]: I0217 08:53:04.251505 4813 generic.go:334] "Generic (PLEG): container finished" podID="45316cad-79c7-4ecb-801e-25dbf1c5d213" containerID="0d711ccd4e2978aae32202ea2cc6d0afbbe720d86d58954bdecf35a0e57ec566" exitCode=0 Feb 17 08:53:04 crc kubenswrapper[4813]: I0217 08:53:04.251571 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" event={"ID":"45316cad-79c7-4ecb-801e-25dbf1c5d213","Type":"ContainerDied","Data":"0d711ccd4e2978aae32202ea2cc6d0afbbe720d86d58954bdecf35a0e57ec566"} Feb 17 08:53:05 crc kubenswrapper[4813]: I0217 08:53:05.165357 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:53:05 crc kubenswrapper[4813]: I0217 08:53:05.165450 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:53:05 crc kubenswrapper[4813]: I0217 08:53:05.620522 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:05 crc kubenswrapper[4813]: I0217 08:53:05.720630 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45316cad-79c7-4ecb-801e-25dbf1c5d213-bundle\") pod \"45316cad-79c7-4ecb-801e-25dbf1c5d213\" (UID: \"45316cad-79c7-4ecb-801e-25dbf1c5d213\") " Feb 17 08:53:05 crc kubenswrapper[4813]: I0217 08:53:05.720801 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45316cad-79c7-4ecb-801e-25dbf1c5d213-util\") pod \"45316cad-79c7-4ecb-801e-25dbf1c5d213\" (UID: \"45316cad-79c7-4ecb-801e-25dbf1c5d213\") " Feb 17 08:53:05 crc kubenswrapper[4813]: I0217 08:53:05.720836 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmmk4\" (UniqueName: \"kubernetes.io/projected/45316cad-79c7-4ecb-801e-25dbf1c5d213-kube-api-access-lmmk4\") pod \"45316cad-79c7-4ecb-801e-25dbf1c5d213\" (UID: \"45316cad-79c7-4ecb-801e-25dbf1c5d213\") " Feb 17 08:53:05 crc kubenswrapper[4813]: I0217 08:53:05.721487 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45316cad-79c7-4ecb-801e-25dbf1c5d213-bundle" (OuterVolumeSpecName: "bundle") pod "45316cad-79c7-4ecb-801e-25dbf1c5d213" (UID: "45316cad-79c7-4ecb-801e-25dbf1c5d213"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:53:05 crc kubenswrapper[4813]: I0217 08:53:05.725531 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45316cad-79c7-4ecb-801e-25dbf1c5d213-kube-api-access-lmmk4" (OuterVolumeSpecName: "kube-api-access-lmmk4") pod "45316cad-79c7-4ecb-801e-25dbf1c5d213" (UID: "45316cad-79c7-4ecb-801e-25dbf1c5d213"). InnerVolumeSpecName "kube-api-access-lmmk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:53:05 crc kubenswrapper[4813]: I0217 08:53:05.735398 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45316cad-79c7-4ecb-801e-25dbf1c5d213-util" (OuterVolumeSpecName: "util") pod "45316cad-79c7-4ecb-801e-25dbf1c5d213" (UID: "45316cad-79c7-4ecb-801e-25dbf1c5d213"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:53:05 crc kubenswrapper[4813]: I0217 08:53:05.822677 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45316cad-79c7-4ecb-801e-25dbf1c5d213-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:53:05 crc kubenswrapper[4813]: I0217 08:53:05.822701 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45316cad-79c7-4ecb-801e-25dbf1c5d213-util\") on node \"crc\" DevicePath \"\"" Feb 17 08:53:05 crc kubenswrapper[4813]: I0217 08:53:05.822710 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmmk4\" (UniqueName: \"kubernetes.io/projected/45316cad-79c7-4ecb-801e-25dbf1c5d213-kube-api-access-lmmk4\") on node \"crc\" DevicePath \"\"" Feb 17 08:53:06 crc kubenswrapper[4813]: I0217 08:53:06.270638 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" event={"ID":"45316cad-79c7-4ecb-801e-25dbf1c5d213","Type":"ContainerDied","Data":"597ef080baf8a62a22d8c144a000b09ed34708a268c84513026005a7220253e7"} Feb 17 08:53:06 crc kubenswrapper[4813]: I0217 08:53:06.270700 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="597ef080baf8a62a22d8c144a000b09ed34708a268c84513026005a7220253e7" Feb 17 08:53:06 crc kubenswrapper[4813]: I0217 08:53:06.270806 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr" Feb 17 08:53:15 crc kubenswrapper[4813]: I0217 08:53:15.959023 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db"] Feb 17 08:53:15 crc kubenswrapper[4813]: E0217 08:53:15.959804 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45316cad-79c7-4ecb-801e-25dbf1c5d213" containerName="util" Feb 17 08:53:15 crc kubenswrapper[4813]: I0217 08:53:15.959819 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="45316cad-79c7-4ecb-801e-25dbf1c5d213" containerName="util" Feb 17 08:53:15 crc kubenswrapper[4813]: E0217 08:53:15.959834 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45316cad-79c7-4ecb-801e-25dbf1c5d213" containerName="extract" Feb 17 08:53:15 crc kubenswrapper[4813]: I0217 08:53:15.959841 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="45316cad-79c7-4ecb-801e-25dbf1c5d213" containerName="extract" Feb 17 08:53:15 crc kubenswrapper[4813]: E0217 08:53:15.959851 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45316cad-79c7-4ecb-801e-25dbf1c5d213" containerName="pull" Feb 17 08:53:15 crc kubenswrapper[4813]: I0217 08:53:15.959859 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="45316cad-79c7-4ecb-801e-25dbf1c5d213" containerName="pull" Feb 17 08:53:15 crc kubenswrapper[4813]: I0217 08:53:15.959990 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="45316cad-79c7-4ecb-801e-25dbf1c5d213" containerName="extract" Feb 17 08:53:15 crc kubenswrapper[4813]: I0217 08:53:15.960493 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:15 crc kubenswrapper[4813]: I0217 08:53:15.962653 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 17 08:53:15 crc kubenswrapper[4813]: I0217 08:53:15.962693 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 17 08:53:15 crc kubenswrapper[4813]: I0217 08:53:15.963499 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xtz72" Feb 17 08:53:15 crc kubenswrapper[4813]: I0217 08:53:15.963612 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 17 08:53:15 crc kubenswrapper[4813]: I0217 08:53:15.963676 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 17 08:53:15 crc kubenswrapper[4813]: I0217 08:53:15.971553 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db"] Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.057425 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e852743f-2bfd-4b73-a5f9-2c56d356b99a-webhook-cert\") pod \"metallb-operator-controller-manager-5ff56d7b8b-xq8db\" (UID: \"e852743f-2bfd-4b73-a5f9-2c56d356b99a\") " pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.057472 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8zsc\" (UniqueName: \"kubernetes.io/projected/e852743f-2bfd-4b73-a5f9-2c56d356b99a-kube-api-access-f8zsc\") pod 
\"metallb-operator-controller-manager-5ff56d7b8b-xq8db\" (UID: \"e852743f-2bfd-4b73-a5f9-2c56d356b99a\") " pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.057779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e852743f-2bfd-4b73-a5f9-2c56d356b99a-apiservice-cert\") pod \"metallb-operator-controller-manager-5ff56d7b8b-xq8db\" (UID: \"e852743f-2bfd-4b73-a5f9-2c56d356b99a\") " pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.159393 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e852743f-2bfd-4b73-a5f9-2c56d356b99a-webhook-cert\") pod \"metallb-operator-controller-manager-5ff56d7b8b-xq8db\" (UID: \"e852743f-2bfd-4b73-a5f9-2c56d356b99a\") " pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.159442 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8zsc\" (UniqueName: \"kubernetes.io/projected/e852743f-2bfd-4b73-a5f9-2c56d356b99a-kube-api-access-f8zsc\") pod \"metallb-operator-controller-manager-5ff56d7b8b-xq8db\" (UID: \"e852743f-2bfd-4b73-a5f9-2c56d356b99a\") " pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.159506 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e852743f-2bfd-4b73-a5f9-2c56d356b99a-apiservice-cert\") pod \"metallb-operator-controller-manager-5ff56d7b8b-xq8db\" (UID: \"e852743f-2bfd-4b73-a5f9-2c56d356b99a\") " pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:16 crc 
kubenswrapper[4813]: I0217 08:53:16.166043 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e852743f-2bfd-4b73-a5f9-2c56d356b99a-webhook-cert\") pod \"metallb-operator-controller-manager-5ff56d7b8b-xq8db\" (UID: \"e852743f-2bfd-4b73-a5f9-2c56d356b99a\") " pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.167859 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e852743f-2bfd-4b73-a5f9-2c56d356b99a-apiservice-cert\") pod \"metallb-operator-controller-manager-5ff56d7b8b-xq8db\" (UID: \"e852743f-2bfd-4b73-a5f9-2c56d356b99a\") " pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.175724 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8zsc\" (UniqueName: \"kubernetes.io/projected/e852743f-2bfd-4b73-a5f9-2c56d356b99a-kube-api-access-f8zsc\") pod \"metallb-operator-controller-manager-5ff56d7b8b-xq8db\" (UID: \"e852743f-2bfd-4b73-a5f9-2c56d356b99a\") " pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.275552 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.325938 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-85f69475bd-296r2"] Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.326950 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.335279 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.335328 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.335542 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-2d8gx" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.340115 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85f69475bd-296r2"] Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.464856 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a20c117f-10e4-46ba-81cc-38c75428e6fc-apiservice-cert\") pod \"metallb-operator-webhook-server-85f69475bd-296r2\" (UID: \"a20c117f-10e4-46ba-81cc-38c75428e6fc\") " pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.465186 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h59t\" (UniqueName: \"kubernetes.io/projected/a20c117f-10e4-46ba-81cc-38c75428e6fc-kube-api-access-4h59t\") pod \"metallb-operator-webhook-server-85f69475bd-296r2\" (UID: \"a20c117f-10e4-46ba-81cc-38c75428e6fc\") " pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.465227 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/a20c117f-10e4-46ba-81cc-38c75428e6fc-webhook-cert\") pod \"metallb-operator-webhook-server-85f69475bd-296r2\" (UID: \"a20c117f-10e4-46ba-81cc-38c75428e6fc\") " pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.566113 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a20c117f-10e4-46ba-81cc-38c75428e6fc-apiservice-cert\") pod \"metallb-operator-webhook-server-85f69475bd-296r2\" (UID: \"a20c117f-10e4-46ba-81cc-38c75428e6fc\") " pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.566191 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h59t\" (UniqueName: \"kubernetes.io/projected/a20c117f-10e4-46ba-81cc-38c75428e6fc-kube-api-access-4h59t\") pod \"metallb-operator-webhook-server-85f69475bd-296r2\" (UID: \"a20c117f-10e4-46ba-81cc-38c75428e6fc\") " pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.566228 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a20c117f-10e4-46ba-81cc-38c75428e6fc-webhook-cert\") pod \"metallb-operator-webhook-server-85f69475bd-296r2\" (UID: \"a20c117f-10e4-46ba-81cc-38c75428e6fc\") " pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.575403 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a20c117f-10e4-46ba-81cc-38c75428e6fc-webhook-cert\") pod \"metallb-operator-webhook-server-85f69475bd-296r2\" (UID: \"a20c117f-10e4-46ba-81cc-38c75428e6fc\") " pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:16 crc 
kubenswrapper[4813]: I0217 08:53:16.575783 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a20c117f-10e4-46ba-81cc-38c75428e6fc-apiservice-cert\") pod \"metallb-operator-webhook-server-85f69475bd-296r2\" (UID: \"a20c117f-10e4-46ba-81cc-38c75428e6fc\") " pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.588973 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h59t\" (UniqueName: \"kubernetes.io/projected/a20c117f-10e4-46ba-81cc-38c75428e6fc-kube-api-access-4h59t\") pod \"metallb-operator-webhook-server-85f69475bd-296r2\" (UID: \"a20c117f-10e4-46ba-81cc-38c75428e6fc\") " pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.674938 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:16 crc kubenswrapper[4813]: I0217 08:53:16.853746 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db"] Feb 17 08:53:16 crc kubenswrapper[4813]: W0217 08:53:16.860133 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode852743f_2bfd_4b73_a5f9_2c56d356b99a.slice/crio-252860977cb343f4d681ec9f00382a9a4ac4f660b0dcab8fe4f3e5a7a0c24759 WatchSource:0}: Error finding container 252860977cb343f4d681ec9f00382a9a4ac4f660b0dcab8fe4f3e5a7a0c24759: Status 404 returned error can't find the container with id 252860977cb343f4d681ec9f00382a9a4ac4f660b0dcab8fe4f3e5a7a0c24759 Feb 17 08:53:17 crc kubenswrapper[4813]: I0217 08:53:17.104121 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85f69475bd-296r2"] Feb 17 08:53:17 crc 
kubenswrapper[4813]: W0217 08:53:17.107274 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda20c117f_10e4_46ba_81cc_38c75428e6fc.slice/crio-b0710b522297b7d4b1652edd19961135fd6f1206f679d58a4b9b8b96fa412130 WatchSource:0}: Error finding container b0710b522297b7d4b1652edd19961135fd6f1206f679d58a4b9b8b96fa412130: Status 404 returned error can't find the container with id b0710b522297b7d4b1652edd19961135fd6f1206f679d58a4b9b8b96fa412130 Feb 17 08:53:17 crc kubenswrapper[4813]: I0217 08:53:17.355298 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" event={"ID":"a20c117f-10e4-46ba-81cc-38c75428e6fc","Type":"ContainerStarted","Data":"b0710b522297b7d4b1652edd19961135fd6f1206f679d58a4b9b8b96fa412130"} Feb 17 08:53:17 crc kubenswrapper[4813]: I0217 08:53:17.359024 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" event={"ID":"e852743f-2bfd-4b73-a5f9-2c56d356b99a","Type":"ContainerStarted","Data":"252860977cb343f4d681ec9f00382a9a4ac4f660b0dcab8fe4f3e5a7a0c24759"} Feb 17 08:53:21 crc kubenswrapper[4813]: I0217 08:53:21.385800 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" event={"ID":"e852743f-2bfd-4b73-a5f9-2c56d356b99a","Type":"ContainerStarted","Data":"a6fddb45bf636336533c4216878d8fa4c2f33db7e04bf1d338438e3570ed2405"} Feb 17 08:53:21 crc kubenswrapper[4813]: I0217 08:53:21.386451 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:21 crc kubenswrapper[4813]: I0217 08:53:21.388683 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" 
event={"ID":"a20c117f-10e4-46ba-81cc-38c75428e6fc","Type":"ContainerStarted","Data":"cac07d735e3e87dc2dac4b8499e992ca5e0e4bedb326084353c3c8fce68767d6"} Feb 17 08:53:21 crc kubenswrapper[4813]: I0217 08:53:21.388865 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:21 crc kubenswrapper[4813]: I0217 08:53:21.414513 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" podStartSLOduration=4.010212698 podStartE2EDuration="6.414488961s" podCreationTimestamp="2026-02-17 08:53:15 +0000 UTC" firstStartedPulling="2026-02-17 08:53:16.862457434 +0000 UTC m=+744.523218657" lastFinishedPulling="2026-02-17 08:53:19.266733697 +0000 UTC m=+746.927494920" observedRunningTime="2026-02-17 08:53:21.414228343 +0000 UTC m=+749.074989586" watchObservedRunningTime="2026-02-17 08:53:21.414488961 +0000 UTC m=+749.075250214" Feb 17 08:53:21 crc kubenswrapper[4813]: I0217 08:53:21.442696 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" podStartSLOduration=1.62905591 podStartE2EDuration="5.442673748s" podCreationTimestamp="2026-02-17 08:53:16 +0000 UTC" firstStartedPulling="2026-02-17 08:53:17.110286508 +0000 UTC m=+744.771047731" lastFinishedPulling="2026-02-17 08:53:20.923904336 +0000 UTC m=+748.584665569" observedRunningTime="2026-02-17 08:53:21.440849457 +0000 UTC m=+749.101610720" watchObservedRunningTime="2026-02-17 08:53:21.442673748 +0000 UTC m=+749.103434981" Feb 17 08:53:26 crc kubenswrapper[4813]: I0217 08:53:26.467916 4813 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 08:53:35 crc kubenswrapper[4813]: I0217 08:53:35.166213 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:53:35 crc kubenswrapper[4813]: I0217 08:53:35.166986 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:53:35 crc kubenswrapper[4813]: I0217 08:53:35.167050 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:53:35 crc kubenswrapper[4813]: I0217 08:53:35.167923 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7375dc71231db7ccf7ec9a93ed4b7c58981e373ed44891ec0dfde219ffc963ad"} pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 08:53:35 crc kubenswrapper[4813]: I0217 08:53:35.168031 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" containerID="cri-o://7375dc71231db7ccf7ec9a93ed4b7c58981e373ed44891ec0dfde219ffc963ad" gracePeriod=600 Feb 17 08:53:35 crc kubenswrapper[4813]: I0217 08:53:35.478066 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a6ba827-b08b-4163-b067-d9adb119398d" containerID="7375dc71231db7ccf7ec9a93ed4b7c58981e373ed44891ec0dfde219ffc963ad" exitCode=0 Feb 17 08:53:35 crc kubenswrapper[4813]: I0217 08:53:35.478116 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerDied","Data":"7375dc71231db7ccf7ec9a93ed4b7c58981e373ed44891ec0dfde219ffc963ad"} Feb 17 08:53:35 crc kubenswrapper[4813]: I0217 08:53:35.478422 4813 scope.go:117] "RemoveContainer" containerID="b435a016b264f7638ac4f0875359992eccdfebd43455e04664f08b4b9bf401cf" Feb 17 08:53:36 crc kubenswrapper[4813]: I0217 08:53:36.486180 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerStarted","Data":"7212b4b532e53d852c5e6fbe5aa59b96599c01899ec81be229b35b10904557df"} Feb 17 08:53:36 crc kubenswrapper[4813]: I0217 08:53:36.681358 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-85f69475bd-296r2" Feb 17 08:53:56 crc kubenswrapper[4813]: I0217 08:53:56.280564 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5ff56d7b8b-xq8db" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.059044 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pkkfc"] Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.062291 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.063993 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bgwdt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.064217 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.064551 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.077080 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d"] Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.078005 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.080290 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.096287 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d"] Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.143666 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-vnsgt"] Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.144512 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-vnsgt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.147696 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-frr-conf\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.147817 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-reloader\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.147855 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-metrics\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.147884 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-frr-startup\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.147905 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-frr-sockets\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.148019 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-metrics-certs\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.148113 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf75r\" (UniqueName: \"kubernetes.io/projected/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-kube-api-access-bf75r\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.149559 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.149661 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.149810 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.149871 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ld9s5" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.165633 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-rsh7g"] Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.166663 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-rsh7g" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.170768 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.182563 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-rsh7g"] Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.248893 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-metrics-certs\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.248938 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c190f15b-e4c6-4fef-9857-654242e7512f-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-kn99d\" (UID: \"c190f15b-e4c6-4fef-9857-654242e7512f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.248976 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf75r\" (UniqueName: \"kubernetes.io/projected/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-kube-api-access-bf75r\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249011 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pngrh\" (UniqueName: \"kubernetes.io/projected/c190f15b-e4c6-4fef-9857-654242e7512f-kube-api-access-pngrh\") pod \"frr-k8s-webhook-server-78b44bf5bb-kn99d\" (UID: \"c190f15b-e4c6-4fef-9857-654242e7512f\") " 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249035 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-frr-conf\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249050 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4mj2\" (UniqueName: \"kubernetes.io/projected/d252311d-9fc5-4ecf-83de-51b9d7d371bb-kube-api-access-v4mj2\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249073 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d252311d-9fc5-4ecf-83de-51b9d7d371bb-memberlist\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249091 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d252311d-9fc5-4ecf-83de-51b9d7d371bb-metallb-excludel2\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249108 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d252311d-9fc5-4ecf-83de-51b9d7d371bb-metrics-certs\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 
08:53:57.249128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-reloader\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249146 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-metrics\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249160 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/947d8da1-a53f-43bf-b2b4-742ec2777803-metrics-certs\") pod \"controller-69bbfbf88f-rsh7g\" (UID: \"947d8da1-a53f-43bf-b2b4-742ec2777803\") " pod="metallb-system/controller-69bbfbf88f-rsh7g" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249178 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-frr-startup\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249193 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-frr-sockets\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249208 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/947d8da1-a53f-43bf-b2b4-742ec2777803-cert\") pod \"controller-69bbfbf88f-rsh7g\" (UID: \"947d8da1-a53f-43bf-b2b4-742ec2777803\") " pod="metallb-system/controller-69bbfbf88f-rsh7g" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249225 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnsdd\" (UniqueName: \"kubernetes.io/projected/947d8da1-a53f-43bf-b2b4-742ec2777803-kube-api-access-fnsdd\") pod \"controller-69bbfbf88f-rsh7g\" (UID: \"947d8da1-a53f-43bf-b2b4-742ec2777803\") " pod="metallb-system/controller-69bbfbf88f-rsh7g" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.249878 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-frr-conf\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.250075 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-reloader\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.250139 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-metrics\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.250240 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-frr-sockets\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " 
pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.250892 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-frr-startup\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.263034 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-metrics-certs\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.265871 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf75r\" (UniqueName: \"kubernetes.io/projected/6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb-kube-api-access-bf75r\") pod \"frr-k8s-pkkfc\" (UID: \"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb\") " pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.350644 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pngrh\" (UniqueName: \"kubernetes.io/projected/c190f15b-e4c6-4fef-9857-654242e7512f-kube-api-access-pngrh\") pod \"frr-k8s-webhook-server-78b44bf5bb-kn99d\" (UID: \"c190f15b-e4c6-4fef-9857-654242e7512f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.350698 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4mj2\" (UniqueName: \"kubernetes.io/projected/d252311d-9fc5-4ecf-83de-51b9d7d371bb-kube-api-access-v4mj2\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.350726 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d252311d-9fc5-4ecf-83de-51b9d7d371bb-memberlist\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.350746 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d252311d-9fc5-4ecf-83de-51b9d7d371bb-metallb-excludel2\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.350761 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d252311d-9fc5-4ecf-83de-51b9d7d371bb-metrics-certs\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.350783 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/947d8da1-a53f-43bf-b2b4-742ec2777803-metrics-certs\") pod \"controller-69bbfbf88f-rsh7g\" (UID: \"947d8da1-a53f-43bf-b2b4-742ec2777803\") " pod="metallb-system/controller-69bbfbf88f-rsh7g" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.350803 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/947d8da1-a53f-43bf-b2b4-742ec2777803-cert\") pod \"controller-69bbfbf88f-rsh7g\" (UID: \"947d8da1-a53f-43bf-b2b4-742ec2777803\") " pod="metallb-system/controller-69bbfbf88f-rsh7g" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.350820 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnsdd\" (UniqueName: 
\"kubernetes.io/projected/947d8da1-a53f-43bf-b2b4-742ec2777803-kube-api-access-fnsdd\") pod \"controller-69bbfbf88f-rsh7g\" (UID: \"947d8da1-a53f-43bf-b2b4-742ec2777803\") " pod="metallb-system/controller-69bbfbf88f-rsh7g" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.350846 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c190f15b-e4c6-4fef-9857-654242e7512f-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-kn99d\" (UID: \"c190f15b-e4c6-4fef-9857-654242e7512f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d" Feb 17 08:53:57 crc kubenswrapper[4813]: E0217 08:53:57.351752 4813 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 08:53:57 crc kubenswrapper[4813]: E0217 08:53:57.351818 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d252311d-9fc5-4ecf-83de-51b9d7d371bb-memberlist podName:d252311d-9fc5-4ecf-83de-51b9d7d371bb nodeName:}" failed. No retries permitted until 2026-02-17 08:53:57.851800377 +0000 UTC m=+785.512561600 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d252311d-9fc5-4ecf-83de-51b9d7d371bb-memberlist") pod "speaker-vnsgt" (UID: "d252311d-9fc5-4ecf-83de-51b9d7d371bb") : secret "metallb-memberlist" not found Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.352632 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d252311d-9fc5-4ecf-83de-51b9d7d371bb-metallb-excludel2\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.353080 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.358008 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c190f15b-e4c6-4fef-9857-654242e7512f-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-kn99d\" (UID: \"c190f15b-e4c6-4fef-9857-654242e7512f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.358045 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d252311d-9fc5-4ecf-83de-51b9d7d371bb-metrics-certs\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.358099 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/947d8da1-a53f-43bf-b2b4-742ec2777803-metrics-certs\") pod \"controller-69bbfbf88f-rsh7g\" (UID: \"947d8da1-a53f-43bf-b2b4-742ec2777803\") " pod="metallb-system/controller-69bbfbf88f-rsh7g" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.369698 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/947d8da1-a53f-43bf-b2b4-742ec2777803-cert\") pod \"controller-69bbfbf88f-rsh7g\" (UID: \"947d8da1-a53f-43bf-b2b4-742ec2777803\") " pod="metallb-system/controller-69bbfbf88f-rsh7g" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.371846 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4mj2\" (UniqueName: \"kubernetes.io/projected/d252311d-9fc5-4ecf-83de-51b9d7d371bb-kube-api-access-v4mj2\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.379256 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pngrh\" (UniqueName: \"kubernetes.io/projected/c190f15b-e4c6-4fef-9857-654242e7512f-kube-api-access-pngrh\") pod \"frr-k8s-webhook-server-78b44bf5bb-kn99d\" (UID: \"c190f15b-e4c6-4fef-9857-654242e7512f\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.379872 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnsdd\" (UniqueName: \"kubernetes.io/projected/947d8da1-a53f-43bf-b2b4-742ec2777803-kube-api-access-fnsdd\") pod \"controller-69bbfbf88f-rsh7g\" (UID: \"947d8da1-a53f-43bf-b2b4-742ec2777803\") " pod="metallb-system/controller-69bbfbf88f-rsh7g" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.380084 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pkkfc" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.398352 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.483332 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-rsh7g" Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.615891 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d"] Feb 17 08:53:57 crc kubenswrapper[4813]: W0217 08:53:57.620550 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc190f15b_e4c6_4fef_9857_654242e7512f.slice/crio-8852441b4e4c6cd75b450374298b097dfe00f405e51816636f3e72abebab823e WatchSource:0}: Error finding container 8852441b4e4c6cd75b450374298b097dfe00f405e51816636f3e72abebab823e: Status 404 returned error can't find the container with id 8852441b4e4c6cd75b450374298b097dfe00f405e51816636f3e72abebab823e Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.645951 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d" event={"ID":"c190f15b-e4c6-4fef-9857-654242e7512f","Type":"ContainerStarted","Data":"8852441b4e4c6cd75b450374298b097dfe00f405e51816636f3e72abebab823e"} Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.646983 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pkkfc" event={"ID":"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb","Type":"ContainerStarted","Data":"6645371fd1f678cc97e8b39cbae2f4cff3e7e09dc73708063081f1cff3c570fe"} Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.673539 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-rsh7g"] Feb 17 08:53:57 crc kubenswrapper[4813]: W0217 08:53:57.676572 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod947d8da1_a53f_43bf_b2b4_742ec2777803.slice/crio-4a839beffa94c3168f6ae468c5d13b6917fff0464fd3ebbb7abe4b1cbe431f2e WatchSource:0}: Error finding container 
4a839beffa94c3168f6ae468c5d13b6917fff0464fd3ebbb7abe4b1cbe431f2e: Status 404 returned error can't find the container with id 4a839beffa94c3168f6ae468c5d13b6917fff0464fd3ebbb7abe4b1cbe431f2e
Feb 17 08:53:57 crc kubenswrapper[4813]: I0217 08:53:57.858474 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d252311d-9fc5-4ecf-83de-51b9d7d371bb-memberlist\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt"
Feb 17 08:53:57 crc kubenswrapper[4813]: E0217 08:53:57.858771 4813 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 17 08:53:57 crc kubenswrapper[4813]: E0217 08:53:57.859009 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d252311d-9fc5-4ecf-83de-51b9d7d371bb-memberlist podName:d252311d-9fc5-4ecf-83de-51b9d7d371bb nodeName:}" failed. No retries permitted until 2026-02-17 08:53:58.858981081 +0000 UTC m=+786.519742324 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d252311d-9fc5-4ecf-83de-51b9d7d371bb-memberlist") pod "speaker-vnsgt" (UID: "d252311d-9fc5-4ecf-83de-51b9d7d371bb") : secret "metallb-memberlist" not found
Feb 17 08:53:58 crc kubenswrapper[4813]: I0217 08:53:58.670670 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-rsh7g" event={"ID":"947d8da1-a53f-43bf-b2b4-742ec2777803","Type":"ContainerStarted","Data":"0dd642833f58885d44fbcbc46c1f3953068404387be502f9054eaa2e46d200d5"}
Feb 17 08:53:58 crc kubenswrapper[4813]: I0217 08:53:58.670713 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-rsh7g" event={"ID":"947d8da1-a53f-43bf-b2b4-742ec2777803","Type":"ContainerStarted","Data":"4b93cf962cb5041a7cd8f2aa3246ba13033fb777e0bf4417f99390fa05d4d020"}
Feb 17 08:53:58 crc kubenswrapper[4813]: I0217 08:53:58.670723 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-rsh7g" event={"ID":"947d8da1-a53f-43bf-b2b4-742ec2777803","Type":"ContainerStarted","Data":"4a839beffa94c3168f6ae468c5d13b6917fff0464fd3ebbb7abe4b1cbe431f2e"}
Feb 17 08:53:58 crc kubenswrapper[4813]: I0217 08:53:58.671161 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-rsh7g"
Feb 17 08:53:58 crc kubenswrapper[4813]: I0217 08:53:58.692964 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-rsh7g" podStartSLOduration=1.692939185 podStartE2EDuration="1.692939185s" podCreationTimestamp="2026-02-17 08:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:53:58.686772819 +0000 UTC m=+786.347534072" watchObservedRunningTime="2026-02-17 08:53:58.692939185 +0000 UTC m=+786.353700428"
Feb 17 08:53:58 crc kubenswrapper[4813]: I0217 08:53:58.871422 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d252311d-9fc5-4ecf-83de-51b9d7d371bb-memberlist\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt"
Feb 17 08:53:58 crc kubenswrapper[4813]: I0217 08:53:58.893576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d252311d-9fc5-4ecf-83de-51b9d7d371bb-memberlist\") pod \"speaker-vnsgt\" (UID: \"d252311d-9fc5-4ecf-83de-51b9d7d371bb\") " pod="metallb-system/speaker-vnsgt"
Feb 17 08:53:58 crc kubenswrapper[4813]: I0217 08:53:58.956342 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-vnsgt"
Feb 17 08:53:59 crc kubenswrapper[4813]: I0217 08:53:59.687716 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vnsgt" event={"ID":"d252311d-9fc5-4ecf-83de-51b9d7d371bb","Type":"ContainerStarted","Data":"fd8c16d9b10733530c2ce5fd230d7829b5fc9862905d3bb77062316ee19df25b"}
Feb 17 08:53:59 crc kubenswrapper[4813]: I0217 08:53:59.687949 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vnsgt" event={"ID":"d252311d-9fc5-4ecf-83de-51b9d7d371bb","Type":"ContainerStarted","Data":"dcde708fcabe5a97ab89fb70ca43b81acc19b999249e35617dc3bffb229db2f9"}
Feb 17 08:53:59 crc kubenswrapper[4813]: I0217 08:53:59.687960 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-vnsgt" event={"ID":"d252311d-9fc5-4ecf-83de-51b9d7d371bb","Type":"ContainerStarted","Data":"b4800627fae477e1f69f6cc6061b03169e63d237f7219f32d3d385b930940446"}
Feb 17 08:53:59 crc kubenswrapper[4813]: I0217 08:53:59.688480 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-vnsgt"
Feb 17 08:53:59 crc kubenswrapper[4813]: I0217 08:53:59.712754 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-vnsgt" podStartSLOduration=2.712739331 podStartE2EDuration="2.712739331s" podCreationTimestamp="2026-02-17 08:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:53:59.709237791 +0000 UTC m=+787.369999014" watchObservedRunningTime="2026-02-17 08:53:59.712739331 +0000 UTC m=+787.373500554"
Feb 17 08:54:04 crc kubenswrapper[4813]: I0217 08:54:04.719265 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d" event={"ID":"c190f15b-e4c6-4fef-9857-654242e7512f","Type":"ContainerStarted","Data":"8c52af8de1d61244ca04818c59ec8edd530e0a14e41744f62eea8ded0d05f49e"}
Feb 17 08:54:04 crc kubenswrapper[4813]: I0217 08:54:04.721334 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d"
Feb 17 08:54:04 crc kubenswrapper[4813]: I0217 08:54:04.721500 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pkkfc" event={"ID":"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb","Type":"ContainerDied","Data":"a0ccc21f447531767ea9135ccf63168e51fbefab4532007a379d191993128f69"}
Feb 17 08:54:04 crc kubenswrapper[4813]: I0217 08:54:04.721417 4813 generic.go:334] "Generic (PLEG): container finished" podID="6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb" containerID="a0ccc21f447531767ea9135ccf63168e51fbefab4532007a379d191993128f69" exitCode=0
Feb 17 08:54:04 crc kubenswrapper[4813]: I0217 08:54:04.740325 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d" podStartSLOduration=1.326194796 podStartE2EDuration="7.740290995s" podCreationTimestamp="2026-02-17 08:53:57 +0000 UTC" firstStartedPulling="2026-02-17 08:53:57.622494707 +0000 UTC m=+785.283255930" lastFinishedPulling="2026-02-17 08:54:04.036590896 +0000 UTC m=+791.697352129" observedRunningTime="2026-02-17 08:54:04.739129882 +0000 UTC m=+792.399891125" watchObservedRunningTime="2026-02-17 08:54:04.740290995 +0000 UTC m=+792.401052218"
Feb 17 08:54:05 crc kubenswrapper[4813]: I0217 08:54:05.733830 4813 generic.go:334] "Generic (PLEG): container finished" podID="6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb" containerID="0ed76c19fdd2f6e9eeefa62a3184e93ffcfd091712cd30459d91614b373e91ff" exitCode=0
Feb 17 08:54:05 crc kubenswrapper[4813]: I0217 08:54:05.733909 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pkkfc" event={"ID":"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb","Type":"ContainerDied","Data":"0ed76c19fdd2f6e9eeefa62a3184e93ffcfd091712cd30459d91614b373e91ff"}
Feb 17 08:54:06 crc kubenswrapper[4813]: I0217 08:54:06.742905 4813 generic.go:334] "Generic (PLEG): container finished" podID="6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb" containerID="ede6cf31a08843cbf223c6e09c8ea282037b4e9f68e8bd4f8a4787ce6d3b94d6" exitCode=0
Feb 17 08:54:06 crc kubenswrapper[4813]: I0217 08:54:06.743017 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pkkfc" event={"ID":"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb","Type":"ContainerDied","Data":"ede6cf31a08843cbf223c6e09c8ea282037b4e9f68e8bd4f8a4787ce6d3b94d6"}
Feb 17 08:54:07 crc kubenswrapper[4813]: I0217 08:54:07.492721 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-rsh7g"
Feb 17 08:54:07 crc kubenswrapper[4813]: I0217 08:54:07.755888 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pkkfc" event={"ID":"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb","Type":"ContainerStarted","Data":"ce01a2b8c2d80a0536ec8da4c0d27437c45fbb2cd31ada05980ac521e0b0cda0"}
Feb 17 08:54:07 crc kubenswrapper[4813]: I0217 08:54:07.755928 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pkkfc" event={"ID":"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb","Type":"ContainerStarted","Data":"41ec96359e49b95b5dd195b96a47a281c6f05bfc1e0833bbe1f4ec4e214f6c75"}
Feb 17 08:54:07 crc kubenswrapper[4813]: I0217 08:54:07.755937 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pkkfc" event={"ID":"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb","Type":"ContainerStarted","Data":"67844f85d85fc0e56c7f5627a004aa9477f16467150859855f631fd63260617b"}
Feb 17 08:54:07 crc kubenswrapper[4813]: I0217 08:54:07.755945 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pkkfc" event={"ID":"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb","Type":"ContainerStarted","Data":"63c8fb992172e97d738a563f6a7d5ae61c1542e014702d4cf6b37da2624b5dd0"}
Feb 17 08:54:07 crc kubenswrapper[4813]: I0217 08:54:07.755954 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pkkfc" event={"ID":"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb","Type":"ContainerStarted","Data":"49596ff0a42ae6241ae9835b0691dff6572906662d4e7c70eb1aa5f2fdbc2dd4"}
Feb 17 08:54:08 crc kubenswrapper[4813]: I0217 08:54:08.768983 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pkkfc" event={"ID":"6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb","Type":"ContainerStarted","Data":"cb82a0ec61f51690ac349c8e6f354ece1dca8d762c7b372212156f41a8b078b2"}
Feb 17 08:54:08 crc kubenswrapper[4813]: I0217 08:54:08.769540 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pkkfc"
Feb 17 08:54:08 crc kubenswrapper[4813]: I0217 08:54:08.794683 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pkkfc" podStartSLOduration=5.246642435 podStartE2EDuration="11.794655732s" podCreationTimestamp="2026-02-17 08:53:57 +0000 UTC" firstStartedPulling="2026-02-17 08:53:57.493445147 +0000 UTC m=+785.154206380" lastFinishedPulling="2026-02-17 08:54:04.041458444 +0000 UTC m=+791.702219677" observedRunningTime="2026-02-17 08:54:08.789067053 +0000 UTC m=+796.449828306" watchObservedRunningTime="2026-02-17 08:54:08.794655732 +0000 UTC m=+796.455416995"
Feb 17 08:54:12 crc kubenswrapper[4813]: I0217 08:54:12.381203 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pkkfc"
Feb 17 08:54:12 crc kubenswrapper[4813]: I0217 08:54:12.451381 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pkkfc"
Feb 17 08:54:17 crc kubenswrapper[4813]: I0217 08:54:17.382564 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pkkfc"
Feb 17 08:54:17 crc kubenswrapper[4813]: I0217 08:54:17.408203 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-kn99d"
Feb 17 08:54:18 crc kubenswrapper[4813]: I0217 08:54:18.959359 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-vnsgt"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.265165 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"]
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.267359 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.270113 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.281735 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"]
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.436836 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63d43bda-eb95-4074-8415-8e7196bd950e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx\" (UID: \"63d43bda-eb95-4074-8415-8e7196bd950e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.436948 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2mmg\" (UniqueName: \"kubernetes.io/projected/63d43bda-eb95-4074-8415-8e7196bd950e-kube-api-access-k2mmg\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx\" (UID: \"63d43bda-eb95-4074-8415-8e7196bd950e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.437015 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63d43bda-eb95-4074-8415-8e7196bd950e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx\" (UID: \"63d43bda-eb95-4074-8415-8e7196bd950e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.537944 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63d43bda-eb95-4074-8415-8e7196bd950e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx\" (UID: \"63d43bda-eb95-4074-8415-8e7196bd950e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.538036 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63d43bda-eb95-4074-8415-8e7196bd950e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx\" (UID: \"63d43bda-eb95-4074-8415-8e7196bd950e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.538089 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2mmg\" (UniqueName: \"kubernetes.io/projected/63d43bda-eb95-4074-8415-8e7196bd950e-kube-api-access-k2mmg\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx\" (UID: \"63d43bda-eb95-4074-8415-8e7196bd950e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.538650 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63d43bda-eb95-4074-8415-8e7196bd950e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx\" (UID: \"63d43bda-eb95-4074-8415-8e7196bd950e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.538664 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63d43bda-eb95-4074-8415-8e7196bd950e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx\" (UID: \"63d43bda-eb95-4074-8415-8e7196bd950e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.578956 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2mmg\" (UniqueName: \"kubernetes.io/projected/63d43bda-eb95-4074-8415-8e7196bd950e-kube-api-access-k2mmg\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx\" (UID: \"63d43bda-eb95-4074-8415-8e7196bd950e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.582900 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:21 crc kubenswrapper[4813]: I0217 08:54:21.998981 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"]
Feb 17 08:54:22 crc kubenswrapper[4813]: W0217 08:54:22.009173 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63d43bda_eb95_4074_8415_8e7196bd950e.slice/crio-4a6f550e83dfc3f501d885b0ffe01aa54627c4965b6b9aad6722591fa94f91b1 WatchSource:0}: Error finding container 4a6f550e83dfc3f501d885b0ffe01aa54627c4965b6b9aad6722591fa94f91b1: Status 404 returned error can't find the container with id 4a6f550e83dfc3f501d885b0ffe01aa54627c4965b6b9aad6722591fa94f91b1
Feb 17 08:54:22 crc kubenswrapper[4813]: I0217 08:54:22.879914 4813 generic.go:334] "Generic (PLEG): container finished" podID="63d43bda-eb95-4074-8415-8e7196bd950e" containerID="cf0a6087c5fb2c521d121eb535e39bfedab4c8a664b031cb633ecf8e3a8541e3" exitCode=0
Feb 17 08:54:22 crc kubenswrapper[4813]: I0217 08:54:22.880028 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx" event={"ID":"63d43bda-eb95-4074-8415-8e7196bd950e","Type":"ContainerDied","Data":"cf0a6087c5fb2c521d121eb535e39bfedab4c8a664b031cb633ecf8e3a8541e3"}
Feb 17 08:54:22 crc kubenswrapper[4813]: I0217 08:54:22.880196 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx" event={"ID":"63d43bda-eb95-4074-8415-8e7196bd950e","Type":"ContainerStarted","Data":"4a6f550e83dfc3f501d885b0ffe01aa54627c4965b6b9aad6722591fa94f91b1"}
Feb 17 08:54:24 crc kubenswrapper[4813]: I0217 08:54:24.811939 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hxrlr"]
Feb 17 08:54:24 crc kubenswrapper[4813]: I0217 08:54:24.813640 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:24 crc kubenswrapper[4813]: I0217 08:54:24.827636 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxrlr"]
Feb 17 08:54:24 crc kubenswrapper[4813]: I0217 08:54:24.989172 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4544ab46-43e8-4a7a-99e2-085ad050ee79-utilities\") pod \"redhat-operators-hxrlr\" (UID: \"4544ab46-43e8-4a7a-99e2-085ad050ee79\") " pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:24 crc kubenswrapper[4813]: I0217 08:54:24.989216 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4544ab46-43e8-4a7a-99e2-085ad050ee79-catalog-content\") pod \"redhat-operators-hxrlr\" (UID: \"4544ab46-43e8-4a7a-99e2-085ad050ee79\") " pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:24 crc kubenswrapper[4813]: I0217 08:54:24.989275 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpg82\" (UniqueName: \"kubernetes.io/projected/4544ab46-43e8-4a7a-99e2-085ad050ee79-kube-api-access-zpg82\") pod \"redhat-operators-hxrlr\" (UID: \"4544ab46-43e8-4a7a-99e2-085ad050ee79\") " pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:25 crc kubenswrapper[4813]: I0217 08:54:25.089973 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpg82\" (UniqueName: \"kubernetes.io/projected/4544ab46-43e8-4a7a-99e2-085ad050ee79-kube-api-access-zpg82\") pod \"redhat-operators-hxrlr\" (UID: \"4544ab46-43e8-4a7a-99e2-085ad050ee79\") " pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:25 crc kubenswrapper[4813]: I0217 08:54:25.090081 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4544ab46-43e8-4a7a-99e2-085ad050ee79-utilities\") pod \"redhat-operators-hxrlr\" (UID: \"4544ab46-43e8-4a7a-99e2-085ad050ee79\") " pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:25 crc kubenswrapper[4813]: I0217 08:54:25.090107 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4544ab46-43e8-4a7a-99e2-085ad050ee79-catalog-content\") pod \"redhat-operators-hxrlr\" (UID: \"4544ab46-43e8-4a7a-99e2-085ad050ee79\") " pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:25 crc kubenswrapper[4813]: I0217 08:54:25.090483 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4544ab46-43e8-4a7a-99e2-085ad050ee79-utilities\") pod \"redhat-operators-hxrlr\" (UID: \"4544ab46-43e8-4a7a-99e2-085ad050ee79\") " pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:25 crc kubenswrapper[4813]: I0217 08:54:25.090568 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4544ab46-43e8-4a7a-99e2-085ad050ee79-catalog-content\") pod \"redhat-operators-hxrlr\" (UID: \"4544ab46-43e8-4a7a-99e2-085ad050ee79\") " pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:25 crc kubenswrapper[4813]: I0217 08:54:25.110429 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpg82\" (UniqueName: \"kubernetes.io/projected/4544ab46-43e8-4a7a-99e2-085ad050ee79-kube-api-access-zpg82\") pod \"redhat-operators-hxrlr\" (UID: \"4544ab46-43e8-4a7a-99e2-085ad050ee79\") " pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:25 crc kubenswrapper[4813]: I0217 08:54:25.175961 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:26 crc kubenswrapper[4813]: I0217 08:54:26.646211 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hxrlr"]
Feb 17 08:54:26 crc kubenswrapper[4813]: W0217 08:54:26.652902 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4544ab46_43e8_4a7a_99e2_085ad050ee79.slice/crio-d39d82acefa42c834df6e1651fbb6c36896efd5453dc7a310e884f7df67cfecd WatchSource:0}: Error finding container d39d82acefa42c834df6e1651fbb6c36896efd5453dc7a310e884f7df67cfecd: Status 404 returned error can't find the container with id d39d82acefa42c834df6e1651fbb6c36896efd5453dc7a310e884f7df67cfecd
Feb 17 08:54:26 crc kubenswrapper[4813]: I0217 08:54:26.913985 4813 generic.go:334] "Generic (PLEG): container finished" podID="63d43bda-eb95-4074-8415-8e7196bd950e" containerID="599022f722519d46ce354db1868f92601a6c3d5637574966b8d66e2afccd1467" exitCode=0
Feb 17 08:54:26 crc kubenswrapper[4813]: I0217 08:54:26.914036 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx" event={"ID":"63d43bda-eb95-4074-8415-8e7196bd950e","Type":"ContainerDied","Data":"599022f722519d46ce354db1868f92601a6c3d5637574966b8d66e2afccd1467"}
Feb 17 08:54:26 crc kubenswrapper[4813]: I0217 08:54:26.920091 4813 generic.go:334] "Generic (PLEG): container finished" podID="4544ab46-43e8-4a7a-99e2-085ad050ee79" containerID="9b98eb70ba4b6650d91d43820931152a56cbfb67ad933f2709b720dde10a6bc7" exitCode=0
Feb 17 08:54:26 crc kubenswrapper[4813]: I0217 08:54:26.920136 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxrlr" event={"ID":"4544ab46-43e8-4a7a-99e2-085ad050ee79","Type":"ContainerDied","Data":"9b98eb70ba4b6650d91d43820931152a56cbfb67ad933f2709b720dde10a6bc7"}
Feb 17 08:54:26 crc kubenswrapper[4813]: I0217 08:54:26.920165 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxrlr" event={"ID":"4544ab46-43e8-4a7a-99e2-085ad050ee79","Type":"ContainerStarted","Data":"d39d82acefa42c834df6e1651fbb6c36896efd5453dc7a310e884f7df67cfecd"}
Feb 17 08:54:27 crc kubenswrapper[4813]: I0217 08:54:27.930508 4813 generic.go:334] "Generic (PLEG): container finished" podID="63d43bda-eb95-4074-8415-8e7196bd950e" containerID="d3cb7c4c1e96834900072d0a01ea89659c7c430980df0eb33ed8d1035a90f830" exitCode=0
Feb 17 08:54:27 crc kubenswrapper[4813]: I0217 08:54:27.930875 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx" event={"ID":"63d43bda-eb95-4074-8415-8e7196bd950e","Type":"ContainerDied","Data":"d3cb7c4c1e96834900072d0a01ea89659c7c430980df0eb33ed8d1035a90f830"}
Feb 17 08:54:27 crc kubenswrapper[4813]: I0217 08:54:27.933736 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxrlr" event={"ID":"4544ab46-43e8-4a7a-99e2-085ad050ee79","Type":"ContainerStarted","Data":"5e20271c64e61f90ce4d195b462981c9a256c001001990ba168ad6100c8cc5e7"}
Feb 17 08:54:28 crc kubenswrapper[4813]: I0217 08:54:28.942449 4813 generic.go:334] "Generic (PLEG): container finished" podID="4544ab46-43e8-4a7a-99e2-085ad050ee79" containerID="5e20271c64e61f90ce4d195b462981c9a256c001001990ba168ad6100c8cc5e7" exitCode=0
Feb 17 08:54:28 crc kubenswrapper[4813]: I0217 08:54:28.942507 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxrlr" event={"ID":"4544ab46-43e8-4a7a-99e2-085ad050ee79","Type":"ContainerDied","Data":"5e20271c64e61f90ce4d195b462981c9a256c001001990ba168ad6100c8cc5e7"}
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.206357 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.342779 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63d43bda-eb95-4074-8415-8e7196bd950e-util\") pod \"63d43bda-eb95-4074-8415-8e7196bd950e\" (UID: \"63d43bda-eb95-4074-8415-8e7196bd950e\") "
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.342851 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2mmg\" (UniqueName: \"kubernetes.io/projected/63d43bda-eb95-4074-8415-8e7196bd950e-kube-api-access-k2mmg\") pod \"63d43bda-eb95-4074-8415-8e7196bd950e\" (UID: \"63d43bda-eb95-4074-8415-8e7196bd950e\") "
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.342963 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63d43bda-eb95-4074-8415-8e7196bd950e-bundle\") pod \"63d43bda-eb95-4074-8415-8e7196bd950e\" (UID: \"63d43bda-eb95-4074-8415-8e7196bd950e\") "
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.344060 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63d43bda-eb95-4074-8415-8e7196bd950e-bundle" (OuterVolumeSpecName: "bundle") pod "63d43bda-eb95-4074-8415-8e7196bd950e" (UID: "63d43bda-eb95-4074-8415-8e7196bd950e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.350451 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d43bda-eb95-4074-8415-8e7196bd950e-kube-api-access-k2mmg" (OuterVolumeSpecName: "kube-api-access-k2mmg") pod "63d43bda-eb95-4074-8415-8e7196bd950e" (UID: "63d43bda-eb95-4074-8415-8e7196bd950e"). InnerVolumeSpecName "kube-api-access-k2mmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.361536 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63d43bda-eb95-4074-8415-8e7196bd950e-util" (OuterVolumeSpecName: "util") pod "63d43bda-eb95-4074-8415-8e7196bd950e" (UID: "63d43bda-eb95-4074-8415-8e7196bd950e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.444402 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63d43bda-eb95-4074-8415-8e7196bd950e-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.444723 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63d43bda-eb95-4074-8415-8e7196bd950e-util\") on node \"crc\" DevicePath \"\""
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.444739 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2mmg\" (UniqueName: \"kubernetes.io/projected/63d43bda-eb95-4074-8415-8e7196bd950e-kube-api-access-k2mmg\") on node \"crc\" DevicePath \"\""
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.964841 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxrlr" event={"ID":"4544ab46-43e8-4a7a-99e2-085ad050ee79","Type":"ContainerStarted","Data":"7cad01d6bd98327193801da0362205eb69aa454b11b12d9bf6ebe08b62d41017"}
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.967519 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx" event={"ID":"63d43bda-eb95-4074-8415-8e7196bd950e","Type":"ContainerDied","Data":"4a6f550e83dfc3f501d885b0ffe01aa54627c4965b6b9aad6722591fa94f91b1"}
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.967569 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a6f550e83dfc3f501d885b0ffe01aa54627c4965b6b9aad6722591fa94f91b1"
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.967642 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx"
Feb 17 08:54:29 crc kubenswrapper[4813]: I0217 08:54:29.989644 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hxrlr" podStartSLOduration=3.5410183870000003 podStartE2EDuration="5.989629511s" podCreationTimestamp="2026-02-17 08:54:24 +0000 UTC" firstStartedPulling="2026-02-17 08:54:26.92191497 +0000 UTC m=+814.582676193" lastFinishedPulling="2026-02-17 08:54:29.370526094 +0000 UTC m=+817.031287317" observedRunningTime="2026-02-17 08:54:29.986578924 +0000 UTC m=+817.647340187" watchObservedRunningTime="2026-02-17 08:54:29.989629511 +0000 UTC m=+817.650390724"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.176925 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.178529 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hxrlr"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.567251 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852"]
Feb 17 08:54:35 crc kubenswrapper[4813]: E0217 08:54:35.567493 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d43bda-eb95-4074-8415-8e7196bd950e" containerName="extract"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.567505 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d43bda-eb95-4074-8415-8e7196bd950e" containerName="extract"
Feb 17 08:54:35 crc kubenswrapper[4813]: E0217 08:54:35.567519 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d43bda-eb95-4074-8415-8e7196bd950e" containerName="pull"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.567525 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d43bda-eb95-4074-8415-8e7196bd950e" containerName="pull"
Feb 17 08:54:35 crc kubenswrapper[4813]: E0217 08:54:35.567539 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d43bda-eb95-4074-8415-8e7196bd950e" containerName="util"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.567545 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d43bda-eb95-4074-8415-8e7196bd950e" containerName="util"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.567647 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d43bda-eb95-4074-8415-8e7196bd950e" containerName="extract"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.568230 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.573195 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.573336 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.573654 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-cq7n9"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.591941 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852"]
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.731689 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhlwc\" (UniqueName: \"kubernetes.io/projected/12325fab-4e11-4510-98c8-c3a6fa691cbf-kube-api-access-jhlwc\") pod \"cert-manager-operator-controller-manager-66c8bdd694-t7852\" (UID: \"12325fab-4e11-4510-98c8-c3a6fa691cbf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.731738 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12325fab-4e11-4510-98c8-c3a6fa691cbf-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-t7852\" (UID: \"12325fab-4e11-4510-98c8-c3a6fa691cbf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.834169 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhlwc\" (UniqueName: \"kubernetes.io/projected/12325fab-4e11-4510-98c8-c3a6fa691cbf-kube-api-access-jhlwc\") pod \"cert-manager-operator-controller-manager-66c8bdd694-t7852\" (UID: \"12325fab-4e11-4510-98c8-c3a6fa691cbf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.834240 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12325fab-4e11-4510-98c8-c3a6fa691cbf-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-t7852\" (UID: \"12325fab-4e11-4510-98c8-c3a6fa691cbf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.834749 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/12325fab-4e11-4510-98c8-c3a6fa691cbf-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-t7852\" (UID: \"12325fab-4e11-4510-98c8-c3a6fa691cbf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.855773 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhlwc\" (UniqueName: \"kubernetes.io/projected/12325fab-4e11-4510-98c8-c3a6fa691cbf-kube-api-access-jhlwc\") pod \"cert-manager-operator-controller-manager-66c8bdd694-t7852\" (UID: \"12325fab-4e11-4510-98c8-c3a6fa691cbf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852"
Feb 17 08:54:35 crc kubenswrapper[4813]: I0217 08:54:35.883761 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852"
Feb 17 08:54:36 crc kubenswrapper[4813]: I0217 08:54:36.231201 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hxrlr" podUID="4544ab46-43e8-4a7a-99e2-085ad050ee79" containerName="registry-server" probeResult="failure" output=<
Feb 17 08:54:36 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s
Feb 17 08:54:36 crc kubenswrapper[4813]: >
Feb 17 08:54:36 crc kubenswrapper[4813]: I0217 08:54:36.353064 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852"]
Feb 17 08:54:36 crc kubenswrapper[4813]: W0217 08:54:36.360759 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12325fab_4e11_4510_98c8_c3a6fa691cbf.slice/crio-6593e0430c4397a5161e67b2a9dd90b31d0c1aecf2024190384bea00cb254e23 WatchSource:0}: Error finding container 6593e0430c4397a5161e67b2a9dd90b31d0c1aecf2024190384bea00cb254e23: Status 404 returned error can't find the container with id 6593e0430c4397a5161e67b2a9dd90b31d0c1aecf2024190384bea00cb254e23
Feb 17 08:54:37 crc kubenswrapper[4813]: I0217 08:54:37.010665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852" event={"ID":"12325fab-4e11-4510-98c8-c3a6fa691cbf","Type":"ContainerStarted","Data":"6593e0430c4397a5161e67b2a9dd90b31d0c1aecf2024190384bea00cb254e23"}
Feb 17 08:54:40 crc kubenswrapper[4813]: I0217 08:54:40.033567 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852" event={"ID":"12325fab-4e11-4510-98c8-c3a6fa691cbf","Type":"ContainerStarted","Data":"7d5854020688accdee53d6b37d09e08388d65e9b8d688b466ce0250dc1ef74ee"}
Feb 17 08:54:40
crc kubenswrapper[4813]: I0217 08:54:40.063221 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-t7852" podStartSLOduration=1.894244916 podStartE2EDuration="5.063205744s" podCreationTimestamp="2026-02-17 08:54:35 +0000 UTC" firstStartedPulling="2026-02-17 08:54:36.365762504 +0000 UTC m=+824.026523727" lastFinishedPulling="2026-02-17 08:54:39.534723332 +0000 UTC m=+827.195484555" observedRunningTime="2026-02-17 08:54:40.052332204 +0000 UTC m=+827.713093427" watchObservedRunningTime="2026-02-17 08:54:40.063205744 +0000 UTC m=+827.723966967" Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.033425 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-xl8fx"] Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.034555 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.036077 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.036271 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.036480 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-m64hw" Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.057881 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ce963af-f2c1-47af-85f7-658fa10e6394-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-xl8fx\" (UID: \"7ce963af-f2c1-47af-85f7-658fa10e6394\") " pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" Feb 17 08:54:44 crc 
kubenswrapper[4813]: I0217 08:54:44.058260 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb4mg\" (UniqueName: \"kubernetes.io/projected/7ce963af-f2c1-47af-85f7-658fa10e6394-kube-api-access-zb4mg\") pod \"cert-manager-webhook-6888856db4-xl8fx\" (UID: \"7ce963af-f2c1-47af-85f7-658fa10e6394\") " pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.064106 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-xl8fx"] Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.159191 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ce963af-f2c1-47af-85f7-658fa10e6394-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-xl8fx\" (UID: \"7ce963af-f2c1-47af-85f7-658fa10e6394\") " pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.159242 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb4mg\" (UniqueName: \"kubernetes.io/projected/7ce963af-f2c1-47af-85f7-658fa10e6394-kube-api-access-zb4mg\") pod \"cert-manager-webhook-6888856db4-xl8fx\" (UID: \"7ce963af-f2c1-47af-85f7-658fa10e6394\") " pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.177580 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7ce963af-f2c1-47af-85f7-658fa10e6394-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-xl8fx\" (UID: \"7ce963af-f2c1-47af-85f7-658fa10e6394\") " pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.177744 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb4mg\" 
(UniqueName: \"kubernetes.io/projected/7ce963af-f2c1-47af-85f7-658fa10e6394-kube-api-access-zb4mg\") pod \"cert-manager-webhook-6888856db4-xl8fx\" (UID: \"7ce963af-f2c1-47af-85f7-658fa10e6394\") " pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.353636 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" Feb 17 08:54:44 crc kubenswrapper[4813]: I0217 08:54:44.745859 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-xl8fx"] Feb 17 08:54:45 crc kubenswrapper[4813]: I0217 08:54:45.065508 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" event={"ID":"7ce963af-f2c1-47af-85f7-658fa10e6394","Type":"ContainerStarted","Data":"e8b4dc5c411dc8838786ff58eb2502e6fe51234d55cf6b560d8ecc6b280b77dc"} Feb 17 08:54:45 crc kubenswrapper[4813]: I0217 08:54:45.227831 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hxrlr" Feb 17 08:54:45 crc kubenswrapper[4813]: I0217 08:54:45.289968 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hxrlr" Feb 17 08:54:46 crc kubenswrapper[4813]: I0217 08:54:46.669624 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-2vthk"] Feb 17 08:54:46 crc kubenswrapper[4813]: I0217 08:54:46.671526 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-2vthk" Feb 17 08:54:46 crc kubenswrapper[4813]: W0217 08:54:46.674060 4813 reflector.go:561] object-"cert-manager"/"cert-manager-cainjector-dockercfg-95q7c": failed to list *v1.Secret: secrets "cert-manager-cainjector-dockercfg-95q7c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object Feb 17 08:54:46 crc kubenswrapper[4813]: E0217 08:54:46.674118 4813 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-95q7c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-manager-cainjector-dockercfg-95q7c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 08:54:46 crc kubenswrapper[4813]: I0217 08:54:46.698989 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-2vthk"] Feb 17 08:54:46 crc kubenswrapper[4813]: I0217 08:54:46.706328 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3e18af6-b78a-4769-b2d3-86769c0f5c93-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-2vthk\" (UID: \"b3e18af6-b78a-4769-b2d3-86769c0f5c93\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2vthk" Feb 17 08:54:46 crc kubenswrapper[4813]: I0217 08:54:46.706403 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8pnv\" (UniqueName: \"kubernetes.io/projected/b3e18af6-b78a-4769-b2d3-86769c0f5c93-kube-api-access-r8pnv\") pod \"cert-manager-cainjector-5545bd876-2vthk\" (UID: \"b3e18af6-b78a-4769-b2d3-86769c0f5c93\") " 
pod="cert-manager/cert-manager-cainjector-5545bd876-2vthk" Feb 17 08:54:46 crc kubenswrapper[4813]: I0217 08:54:46.807183 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3e18af6-b78a-4769-b2d3-86769c0f5c93-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-2vthk\" (UID: \"b3e18af6-b78a-4769-b2d3-86769c0f5c93\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2vthk" Feb 17 08:54:46 crc kubenswrapper[4813]: I0217 08:54:46.807678 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8pnv\" (UniqueName: \"kubernetes.io/projected/b3e18af6-b78a-4769-b2d3-86769c0f5c93-kube-api-access-r8pnv\") pod \"cert-manager-cainjector-5545bd876-2vthk\" (UID: \"b3e18af6-b78a-4769-b2d3-86769c0f5c93\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2vthk" Feb 17 08:54:46 crc kubenswrapper[4813]: I0217 08:54:46.834781 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8pnv\" (UniqueName: \"kubernetes.io/projected/b3e18af6-b78a-4769-b2d3-86769c0f5c93-kube-api-access-r8pnv\") pod \"cert-manager-cainjector-5545bd876-2vthk\" (UID: \"b3e18af6-b78a-4769-b2d3-86769c0f5c93\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2vthk" Feb 17 08:54:46 crc kubenswrapper[4813]: I0217 08:54:46.842909 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3e18af6-b78a-4769-b2d3-86769c0f5c93-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-2vthk\" (UID: \"b3e18af6-b78a-4769-b2d3-86769c0f5c93\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2vthk" Feb 17 08:54:47 crc kubenswrapper[4813]: I0217 08:54:47.600797 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxrlr"] Feb 17 08:54:47 crc kubenswrapper[4813]: I0217 08:54:47.601175 4813 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hxrlr" podUID="4544ab46-43e8-4a7a-99e2-085ad050ee79" containerName="registry-server" containerID="cri-o://7cad01d6bd98327193801da0362205eb69aa454b11b12d9bf6ebe08b62d41017" gracePeriod=2 Feb 17 08:54:47 crc kubenswrapper[4813]: I0217 08:54:47.995075 4813 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="cert-manager/cert-manager-cainjector-5545bd876-2vthk" secret="" err="failed to sync secret cache: timed out waiting for the condition" Feb 17 08:54:47 crc kubenswrapper[4813]: I0217 08:54:47.995143 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-2vthk" Feb 17 08:54:48 crc kubenswrapper[4813]: I0217 08:54:48.089464 4813 generic.go:334] "Generic (PLEG): container finished" podID="4544ab46-43e8-4a7a-99e2-085ad050ee79" containerID="7cad01d6bd98327193801da0362205eb69aa454b11b12d9bf6ebe08b62d41017" exitCode=0 Feb 17 08:54:48 crc kubenswrapper[4813]: I0217 08:54:48.089516 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxrlr" event={"ID":"4544ab46-43e8-4a7a-99e2-085ad050ee79","Type":"ContainerDied","Data":"7cad01d6bd98327193801da0362205eb69aa454b11b12d9bf6ebe08b62d41017"} Feb 17 08:54:48 crc kubenswrapper[4813]: I0217 08:54:48.207159 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-95q7c" Feb 17 08:54:49 crc kubenswrapper[4813]: I0217 08:54:49.473616 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxrlr" Feb 17 08:54:49 crc kubenswrapper[4813]: I0217 08:54:49.502364 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-2vthk"] Feb 17 08:54:49 crc kubenswrapper[4813]: W0217 08:54:49.509818 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3e18af6_b78a_4769_b2d3_86769c0f5c93.slice/crio-a19b3067706ad30ca4bcb805e99705e87ce7762c4e99ea16a87b24c137f57ffc WatchSource:0}: Error finding container a19b3067706ad30ca4bcb805e99705e87ce7762c4e99ea16a87b24c137f57ffc: Status 404 returned error can't find the container with id a19b3067706ad30ca4bcb805e99705e87ce7762c4e99ea16a87b24c137f57ffc Feb 17 08:54:49 crc kubenswrapper[4813]: I0217 08:54:49.569419 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpg82\" (UniqueName: \"kubernetes.io/projected/4544ab46-43e8-4a7a-99e2-085ad050ee79-kube-api-access-zpg82\") pod \"4544ab46-43e8-4a7a-99e2-085ad050ee79\" (UID: \"4544ab46-43e8-4a7a-99e2-085ad050ee79\") " Feb 17 08:54:49 crc kubenswrapper[4813]: I0217 08:54:49.569513 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4544ab46-43e8-4a7a-99e2-085ad050ee79-utilities\") pod \"4544ab46-43e8-4a7a-99e2-085ad050ee79\" (UID: \"4544ab46-43e8-4a7a-99e2-085ad050ee79\") " Feb 17 08:54:49 crc kubenswrapper[4813]: I0217 08:54:49.569557 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4544ab46-43e8-4a7a-99e2-085ad050ee79-catalog-content\") pod \"4544ab46-43e8-4a7a-99e2-085ad050ee79\" (UID: \"4544ab46-43e8-4a7a-99e2-085ad050ee79\") " Feb 17 08:54:49 crc kubenswrapper[4813]: I0217 08:54:49.570625 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/4544ab46-43e8-4a7a-99e2-085ad050ee79-utilities" (OuterVolumeSpecName: "utilities") pod "4544ab46-43e8-4a7a-99e2-085ad050ee79" (UID: "4544ab46-43e8-4a7a-99e2-085ad050ee79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:54:49 crc kubenswrapper[4813]: I0217 08:54:49.580109 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4544ab46-43e8-4a7a-99e2-085ad050ee79-kube-api-access-zpg82" (OuterVolumeSpecName: "kube-api-access-zpg82") pod "4544ab46-43e8-4a7a-99e2-085ad050ee79" (UID: "4544ab46-43e8-4a7a-99e2-085ad050ee79"). InnerVolumeSpecName "kube-api-access-zpg82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:54:49 crc kubenswrapper[4813]: I0217 08:54:49.671050 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpg82\" (UniqueName: \"kubernetes.io/projected/4544ab46-43e8-4a7a-99e2-085ad050ee79-kube-api-access-zpg82\") on node \"crc\" DevicePath \"\"" Feb 17 08:54:49 crc kubenswrapper[4813]: I0217 08:54:49.671106 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4544ab46-43e8-4a7a-99e2-085ad050ee79-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 08:54:49 crc kubenswrapper[4813]: I0217 08:54:49.691684 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4544ab46-43e8-4a7a-99e2-085ad050ee79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4544ab46-43e8-4a7a-99e2-085ad050ee79" (UID: "4544ab46-43e8-4a7a-99e2-085ad050ee79"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:54:49 crc kubenswrapper[4813]: I0217 08:54:49.772290 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4544ab46-43e8-4a7a-99e2-085ad050ee79-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.123082 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" event={"ID":"7ce963af-f2c1-47af-85f7-658fa10e6394","Type":"ContainerStarted","Data":"64a71f961aa3bc74e27c189b03309ee0637fa36b2e0bf01c507fbdcfd893806f"} Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.123971 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.134198 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hxrlr" event={"ID":"4544ab46-43e8-4a7a-99e2-085ad050ee79","Type":"ContainerDied","Data":"d39d82acefa42c834df6e1651fbb6c36896efd5453dc7a310e884f7df67cfecd"} Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.134270 4813 scope.go:117] "RemoveContainer" containerID="7cad01d6bd98327193801da0362205eb69aa454b11b12d9bf6ebe08b62d41017" Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.134508 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hxrlr" Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.140899 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-2vthk" event={"ID":"b3e18af6-b78a-4769-b2d3-86769c0f5c93","Type":"ContainerStarted","Data":"9d06e33b683f5e88a18cfdde5d3fa3851e1d49f622156a47e8133fd9c265534f"} Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.140976 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-2vthk" event={"ID":"b3e18af6-b78a-4769-b2d3-86769c0f5c93","Type":"ContainerStarted","Data":"a19b3067706ad30ca4bcb805e99705e87ce7762c4e99ea16a87b24c137f57ffc"} Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.148788 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" podStartSLOduration=1.735165364 podStartE2EDuration="6.148766598s" podCreationTimestamp="2026-02-17 08:54:44 +0000 UTC" firstStartedPulling="2026-02-17 08:54:44.754053923 +0000 UTC m=+832.414815146" lastFinishedPulling="2026-02-17 08:54:49.167655147 +0000 UTC m=+836.828416380" observedRunningTime="2026-02-17 08:54:50.147392969 +0000 UTC m=+837.808154202" watchObservedRunningTime="2026-02-17 08:54:50.148766598 +0000 UTC m=+837.809527831" Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.171504 4813 scope.go:117] "RemoveContainer" containerID="5e20271c64e61f90ce4d195b462981c9a256c001001990ba168ad6100c8cc5e7" Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.187388 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-2vthk" podStartSLOduration=4.187371059 podStartE2EDuration="4.187371059s" podCreationTimestamp="2026-02-17 08:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 
08:54:50.185645799 +0000 UTC m=+837.846407032" watchObservedRunningTime="2026-02-17 08:54:50.187371059 +0000 UTC m=+837.848132292" Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.201351 4813 scope.go:117] "RemoveContainer" containerID="9b98eb70ba4b6650d91d43820931152a56cbfb67ad933f2709b720dde10a6bc7" Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.209952 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hxrlr"] Feb 17 08:54:50 crc kubenswrapper[4813]: I0217 08:54:50.265537 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hxrlr"] Feb 17 08:54:51 crc kubenswrapper[4813]: I0217 08:54:51.119653 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4544ab46-43e8-4a7a-99e2-085ad050ee79" path="/var/lib/kubelet/pods/4544ab46-43e8-4a7a-99e2-085ad050ee79/volumes" Feb 17 08:54:54 crc kubenswrapper[4813]: I0217 08:54:54.356554 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-xl8fx" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.153411 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-xr8mp"] Feb 17 08:54:57 crc kubenswrapper[4813]: E0217 08:54:57.154060 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4544ab46-43e8-4a7a-99e2-085ad050ee79" containerName="extract-utilities" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.154076 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4544ab46-43e8-4a7a-99e2-085ad050ee79" containerName="extract-utilities" Feb 17 08:54:57 crc kubenswrapper[4813]: E0217 08:54:57.154093 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4544ab46-43e8-4a7a-99e2-085ad050ee79" containerName="registry-server" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.154101 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4544ab46-43e8-4a7a-99e2-085ad050ee79" containerName="registry-server" Feb 17 08:54:57 crc kubenswrapper[4813]: E0217 08:54:57.154112 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4544ab46-43e8-4a7a-99e2-085ad050ee79" containerName="extract-content" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.154120 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4544ab46-43e8-4a7a-99e2-085ad050ee79" containerName="extract-content" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.154267 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4544ab46-43e8-4a7a-99e2-085ad050ee79" containerName="registry-server" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.154826 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-xr8mp" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.157414 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-dswl5" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.159118 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-xr8mp"] Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.311248 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pmqw\" (UniqueName: \"kubernetes.io/projected/af53b136-297b-434b-9ff8-47ba49480ed0-kube-api-access-8pmqw\") pod \"cert-manager-545d4d4674-xr8mp\" (UID: \"af53b136-297b-434b-9ff8-47ba49480ed0\") " pod="cert-manager/cert-manager-545d4d4674-xr8mp" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.311321 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af53b136-297b-434b-9ff8-47ba49480ed0-bound-sa-token\") pod \"cert-manager-545d4d4674-xr8mp\" (UID: \"af53b136-297b-434b-9ff8-47ba49480ed0\") 
" pod="cert-manager/cert-manager-545d4d4674-xr8mp" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.412527 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pmqw\" (UniqueName: \"kubernetes.io/projected/af53b136-297b-434b-9ff8-47ba49480ed0-kube-api-access-8pmqw\") pod \"cert-manager-545d4d4674-xr8mp\" (UID: \"af53b136-297b-434b-9ff8-47ba49480ed0\") " pod="cert-manager/cert-manager-545d4d4674-xr8mp" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.412579 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af53b136-297b-434b-9ff8-47ba49480ed0-bound-sa-token\") pod \"cert-manager-545d4d4674-xr8mp\" (UID: \"af53b136-297b-434b-9ff8-47ba49480ed0\") " pod="cert-manager/cert-manager-545d4d4674-xr8mp" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.430581 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af53b136-297b-434b-9ff8-47ba49480ed0-bound-sa-token\") pod \"cert-manager-545d4d4674-xr8mp\" (UID: \"af53b136-297b-434b-9ff8-47ba49480ed0\") " pod="cert-manager/cert-manager-545d4d4674-xr8mp" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.430781 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pmqw\" (UniqueName: \"kubernetes.io/projected/af53b136-297b-434b-9ff8-47ba49480ed0-kube-api-access-8pmqw\") pod \"cert-manager-545d4d4674-xr8mp\" (UID: \"af53b136-297b-434b-9ff8-47ba49480ed0\") " pod="cert-manager/cert-manager-545d4d4674-xr8mp" Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.471867 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-xr8mp" Feb 17 08:54:57 crc kubenswrapper[4813]: W0217 08:54:57.701368 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf53b136_297b_434b_9ff8_47ba49480ed0.slice/crio-9477f150731d44a82e57e687e7726786bbdc92d00966126419051e5e76860abc WatchSource:0}: Error finding container 9477f150731d44a82e57e687e7726786bbdc92d00966126419051e5e76860abc: Status 404 returned error can't find the container with id 9477f150731d44a82e57e687e7726786bbdc92d00966126419051e5e76860abc Feb 17 08:54:57 crc kubenswrapper[4813]: I0217 08:54:57.702668 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-xr8mp"] Feb 17 08:54:58 crc kubenswrapper[4813]: I0217 08:54:58.197361 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-xr8mp" event={"ID":"af53b136-297b-434b-9ff8-47ba49480ed0","Type":"ContainerStarted","Data":"369b637f113c1652d6f01484738a2eb369b4d95364c5fe3e1feb49b15e53ff39"} Feb 17 08:54:58 crc kubenswrapper[4813]: I0217 08:54:58.197449 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-xr8mp" event={"ID":"af53b136-297b-434b-9ff8-47ba49480ed0","Type":"ContainerStarted","Data":"9477f150731d44a82e57e687e7726786bbdc92d00966126419051e5e76860abc"} Feb 17 08:54:58 crc kubenswrapper[4813]: I0217 08:54:58.224014 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-xr8mp" podStartSLOduration=1.223982628 podStartE2EDuration="1.223982628s" podCreationTimestamp="2026-02-17 08:54:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:54:58.214794586 +0000 UTC m=+845.875555819" watchObservedRunningTime="2026-02-17 08:54:58.223982628 +0000 UTC m=+845.884743851" Feb 17 
08:55:11 crc kubenswrapper[4813]: I0217 08:55:11.213138 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gf7nf"] Feb 17 08:55:11 crc kubenswrapper[4813]: I0217 08:55:11.214790 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gf7nf" Feb 17 08:55:11 crc kubenswrapper[4813]: I0217 08:55:11.217160 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 17 08:55:11 crc kubenswrapper[4813]: I0217 08:55:11.218146 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-99mc6" Feb 17 08:55:11 crc kubenswrapper[4813]: I0217 08:55:11.218868 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 17 08:55:11 crc kubenswrapper[4813]: I0217 08:55:11.237211 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gf7nf"] Feb 17 08:55:11 crc kubenswrapper[4813]: I0217 08:55:11.326504 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb79j\" (UniqueName: \"kubernetes.io/projected/45535d9b-8d39-4107-9fdf-b3ab0bc84df6-kube-api-access-sb79j\") pod \"openstack-operator-index-gf7nf\" (UID: \"45535d9b-8d39-4107-9fdf-b3ab0bc84df6\") " pod="openstack-operators/openstack-operator-index-gf7nf" Feb 17 08:55:11 crc kubenswrapper[4813]: I0217 08:55:11.427862 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb79j\" (UniqueName: \"kubernetes.io/projected/45535d9b-8d39-4107-9fdf-b3ab0bc84df6-kube-api-access-sb79j\") pod \"openstack-operator-index-gf7nf\" (UID: \"45535d9b-8d39-4107-9fdf-b3ab0bc84df6\") " pod="openstack-operators/openstack-operator-index-gf7nf" Feb 17 08:55:11 crc kubenswrapper[4813]: I0217 
08:55:11.453089 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb79j\" (UniqueName: \"kubernetes.io/projected/45535d9b-8d39-4107-9fdf-b3ab0bc84df6-kube-api-access-sb79j\") pod \"openstack-operator-index-gf7nf\" (UID: \"45535d9b-8d39-4107-9fdf-b3ab0bc84df6\") " pod="openstack-operators/openstack-operator-index-gf7nf"
Feb 17 08:55:11 crc kubenswrapper[4813]: I0217 08:55:11.545131 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gf7nf"
Feb 17 08:55:11 crc kubenswrapper[4813]: I0217 08:55:11.958983 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gf7nf"]
Feb 17 08:55:12 crc kubenswrapper[4813]: I0217 08:55:12.307971 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gf7nf" event={"ID":"45535d9b-8d39-4107-9fdf-b3ab0bc84df6","Type":"ContainerStarted","Data":"68c9b06d0b8c39cafd10f44f3b9e0119df355e65c5e0288c65483202df1e9327"}
Feb 17 08:55:15 crc kubenswrapper[4813]: I0217 08:55:15.329988 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gf7nf" event={"ID":"45535d9b-8d39-4107-9fdf-b3ab0bc84df6","Type":"ContainerStarted","Data":"abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17"}
Feb 17 08:55:15 crc kubenswrapper[4813]: I0217 08:55:15.357633 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gf7nf" podStartSLOduration=2.064946264 podStartE2EDuration="4.357605649s" podCreationTimestamp="2026-02-17 08:55:11 +0000 UTC" firstStartedPulling="2026-02-17 08:55:11.976740759 +0000 UTC m=+859.637502002" lastFinishedPulling="2026-02-17 08:55:14.269400164 +0000 UTC m=+861.930161387" observedRunningTime="2026-02-17 08:55:15.352110903 +0000 UTC m=+863.012872206" watchObservedRunningTime="2026-02-17 08:55:15.357605649 +0000 UTC m=+863.018366882"
Feb 17 08:55:16 crc kubenswrapper[4813]: I0217 08:55:16.394623 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gf7nf"]
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.001129 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-59h5m"]
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.002284 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-59h5m"
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.011025 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-59h5m"]
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.116214 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9s2\" (UniqueName: \"kubernetes.io/projected/63faca9e-4dcf-4e1f-a3d7-077b0d8e593f-kube-api-access-vp9s2\") pod \"openstack-operator-index-59h5m\" (UID: \"63faca9e-4dcf-4e1f-a3d7-077b0d8e593f\") " pod="openstack-operators/openstack-operator-index-59h5m"
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.218059 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp9s2\" (UniqueName: \"kubernetes.io/projected/63faca9e-4dcf-4e1f-a3d7-077b0d8e593f-kube-api-access-vp9s2\") pod \"openstack-operator-index-59h5m\" (UID: \"63faca9e-4dcf-4e1f-a3d7-077b0d8e593f\") " pod="openstack-operators/openstack-operator-index-59h5m"
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.246806 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp9s2\" (UniqueName: \"kubernetes.io/projected/63faca9e-4dcf-4e1f-a3d7-077b0d8e593f-kube-api-access-vp9s2\") pod \"openstack-operator-index-59h5m\" (UID: \"63faca9e-4dcf-4e1f-a3d7-077b0d8e593f\") " pod="openstack-operators/openstack-operator-index-59h5m"
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.320964 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-59h5m"
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.343563 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-gf7nf" podUID="45535d9b-8d39-4107-9fdf-b3ab0bc84df6" containerName="registry-server" containerID="cri-o://abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17" gracePeriod=2
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.773101 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gf7nf"
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.800218 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-59h5m"]
Feb 17 08:55:17 crc kubenswrapper[4813]: W0217 08:55:17.809345 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63faca9e_4dcf_4e1f_a3d7_077b0d8e593f.slice/crio-af73cab55d8424c312c052d60c56fbec54a0ca912e48e046a74a3961e4290db2 WatchSource:0}: Error finding container af73cab55d8424c312c052d60c56fbec54a0ca912e48e046a74a3961e4290db2: Status 404 returned error can't find the container with id af73cab55d8424c312c052d60c56fbec54a0ca912e48e046a74a3961e4290db2
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.829414 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb79j\" (UniqueName: \"kubernetes.io/projected/45535d9b-8d39-4107-9fdf-b3ab0bc84df6-kube-api-access-sb79j\") pod \"45535d9b-8d39-4107-9fdf-b3ab0bc84df6\" (UID: \"45535d9b-8d39-4107-9fdf-b3ab0bc84df6\") "
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.834260 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45535d9b-8d39-4107-9fdf-b3ab0bc84df6-kube-api-access-sb79j" (OuterVolumeSpecName: "kube-api-access-sb79j") pod "45535d9b-8d39-4107-9fdf-b3ab0bc84df6" (UID: "45535d9b-8d39-4107-9fdf-b3ab0bc84df6"). InnerVolumeSpecName "kube-api-access-sb79j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:55:17 crc kubenswrapper[4813]: I0217 08:55:17.931342 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb79j\" (UniqueName: \"kubernetes.io/projected/45535d9b-8d39-4107-9fdf-b3ab0bc84df6-kube-api-access-sb79j\") on node \"crc\" DevicePath \"\""
Feb 17 08:55:18 crc kubenswrapper[4813]: I0217 08:55:18.351907 4813 generic.go:334] "Generic (PLEG): container finished" podID="45535d9b-8d39-4107-9fdf-b3ab0bc84df6" containerID="abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17" exitCode=0
Feb 17 08:55:18 crc kubenswrapper[4813]: I0217 08:55:18.351979 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gf7nf"
Feb 17 08:55:18 crc kubenswrapper[4813]: I0217 08:55:18.352005 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gf7nf" event={"ID":"45535d9b-8d39-4107-9fdf-b3ab0bc84df6","Type":"ContainerDied","Data":"abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17"}
Feb 17 08:55:18 crc kubenswrapper[4813]: I0217 08:55:18.352581 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gf7nf" event={"ID":"45535d9b-8d39-4107-9fdf-b3ab0bc84df6","Type":"ContainerDied","Data":"68c9b06d0b8c39cafd10f44f3b9e0119df355e65c5e0288c65483202df1e9327"}
Feb 17 08:55:18 crc kubenswrapper[4813]: I0217 08:55:18.352615 4813 scope.go:117] "RemoveContainer" containerID="abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17"
Feb 17 08:55:18 crc kubenswrapper[4813]: I0217 08:55:18.354606 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-59h5m" event={"ID":"63faca9e-4dcf-4e1f-a3d7-077b0d8e593f","Type":"ContainerStarted","Data":"201b61d52243c20ffb3417d1b2fdeff4922eaff5b196d2ea1b258418d437706e"}
Feb 17 08:55:18 crc kubenswrapper[4813]: I0217 08:55:18.354677 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-59h5m" event={"ID":"63faca9e-4dcf-4e1f-a3d7-077b0d8e593f","Type":"ContainerStarted","Data":"af73cab55d8424c312c052d60c56fbec54a0ca912e48e046a74a3961e4290db2"}
Feb 17 08:55:18 crc kubenswrapper[4813]: I0217 08:55:18.381569 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-59h5m" podStartSLOduration=2.326419729 podStartE2EDuration="2.381549021s" podCreationTimestamp="2026-02-17 08:55:16 +0000 UTC" firstStartedPulling="2026-02-17 08:55:17.813439268 +0000 UTC m=+865.474200491" lastFinishedPulling="2026-02-17 08:55:17.86856856 +0000 UTC m=+865.529329783" observedRunningTime="2026-02-17 08:55:18.378410061 +0000 UTC m=+866.039171294" watchObservedRunningTime="2026-02-17 08:55:18.381549021 +0000 UTC m=+866.042310254"
Feb 17 08:55:18 crc kubenswrapper[4813]: I0217 08:55:18.383366 4813 scope.go:117] "RemoveContainer" containerID="abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17"
Feb 17 08:55:18 crc kubenswrapper[4813]: E0217 08:55:18.384475 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17\": container with ID starting with abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17 not found: ID does not exist" containerID="abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17"
Feb 17 08:55:18 crc kubenswrapper[4813]: I0217 08:55:18.384732 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17"} err="failed to get container status \"abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17\": rpc error: code = NotFound desc = could not find container \"abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17\": container with ID starting with abb3e9b5a10b8ba35bed72be300c5b534e95acf7f3e3945cfce20be670bd7f17 not found: ID does not exist"
Feb 17 08:55:18 crc kubenswrapper[4813]: I0217 08:55:18.398689 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gf7nf"]
Feb 17 08:55:18 crc kubenswrapper[4813]: I0217 08:55:18.403780 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-gf7nf"]
Feb 17 08:55:19 crc kubenswrapper[4813]: I0217 08:55:19.126090 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45535d9b-8d39-4107-9fdf-b3ab0bc84df6" path="/var/lib/kubelet/pods/45535d9b-8d39-4107-9fdf-b3ab0bc84df6/volumes"
Feb 17 08:55:27 crc kubenswrapper[4813]: I0217 08:55:27.322141 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-59h5m"
Feb 17 08:55:27 crc kubenswrapper[4813]: I0217 08:55:27.322975 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-59h5m"
Feb 17 08:55:27 crc kubenswrapper[4813]: I0217 08:55:27.376849 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-59h5m"
Feb 17 08:55:27 crc kubenswrapper[4813]: I0217 08:55:27.460640 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-59h5m"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.478664 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"]
Feb 17 08:55:28 crc kubenswrapper[4813]: E0217 08:55:28.479164 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45535d9b-8d39-4107-9fdf-b3ab0bc84df6" containerName="registry-server"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.479178 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="45535d9b-8d39-4107-9fdf-b3ab0bc84df6" containerName="registry-server"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.479371 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="45535d9b-8d39-4107-9fdf-b3ab0bc84df6" containerName="registry-server"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.480254 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.481936 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-fb5f5"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.488615 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"]
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.628031 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9h7m\" (UniqueName: \"kubernetes.io/projected/1d59ab30-e7e4-4056-a1ad-2ee71696466c-kube-api-access-w9h7m\") pod \"69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg\" (UID: \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\") " pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.628398 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d59ab30-e7e4-4056-a1ad-2ee71696466c-util\") pod \"69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg\" (UID: \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\") " pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.628507 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d59ab30-e7e4-4056-a1ad-2ee71696466c-bundle\") pod \"69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg\" (UID: \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\") " pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.729816 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9h7m\" (UniqueName: \"kubernetes.io/projected/1d59ab30-e7e4-4056-a1ad-2ee71696466c-kube-api-access-w9h7m\") pod \"69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg\" (UID: \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\") " pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.730044 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d59ab30-e7e4-4056-a1ad-2ee71696466c-util\") pod \"69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg\" (UID: \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\") " pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.730113 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d59ab30-e7e4-4056-a1ad-2ee71696466c-bundle\") pod \"69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg\" (UID: \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\") " pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.730600 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d59ab30-e7e4-4056-a1ad-2ee71696466c-bundle\") pod \"69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg\" (UID: \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\") " pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.730895 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d59ab30-e7e4-4056-a1ad-2ee71696466c-util\") pod \"69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg\" (UID: \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\") " pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.758834 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9h7m\" (UniqueName: \"kubernetes.io/projected/1d59ab30-e7e4-4056-a1ad-2ee71696466c-kube-api-access-w9h7m\") pod \"69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg\" (UID: \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\") " pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:28 crc kubenswrapper[4813]: I0217 08:55:28.794829 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:29 crc kubenswrapper[4813]: I0217 08:55:29.217285 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"]
Feb 17 08:55:29 crc kubenswrapper[4813]: I0217 08:55:29.431737 4813 generic.go:334] "Generic (PLEG): container finished" podID="1d59ab30-e7e4-4056-a1ad-2ee71696466c" containerID="be75692fb2796360efdbbf6a271dafda362b1a5b510f95c7588363016f06e3bc" exitCode=0
Feb 17 08:55:29 crc kubenswrapper[4813]: I0217 08:55:29.431817 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg" event={"ID":"1d59ab30-e7e4-4056-a1ad-2ee71696466c","Type":"ContainerDied","Data":"be75692fb2796360efdbbf6a271dafda362b1a5b510f95c7588363016f06e3bc"}
Feb 17 08:55:29 crc kubenswrapper[4813]: I0217 08:55:29.432031 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg" event={"ID":"1d59ab30-e7e4-4056-a1ad-2ee71696466c","Type":"ContainerStarted","Data":"14f14f4d434effde0969182d2c385d687532221c59050cf0b817525b890b5945"}
Feb 17 08:55:30 crc kubenswrapper[4813]: I0217 08:55:30.443838 4813 generic.go:334] "Generic (PLEG): container finished" podID="1d59ab30-e7e4-4056-a1ad-2ee71696466c" containerID="203d7d368ebe28eb869b95d31638a2447e26482fed46b962f4f189a37baf7da1" exitCode=0
Feb 17 08:55:30 crc kubenswrapper[4813]: I0217 08:55:30.443916 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg" event={"ID":"1d59ab30-e7e4-4056-a1ad-2ee71696466c","Type":"ContainerDied","Data":"203d7d368ebe28eb869b95d31638a2447e26482fed46b962f4f189a37baf7da1"}
Feb 17 08:55:31 crc kubenswrapper[4813]: I0217 08:55:31.451856 4813 generic.go:334] "Generic (PLEG): container finished" podID="1d59ab30-e7e4-4056-a1ad-2ee71696466c" containerID="b0c52294e6cf0b7b2699ec0200e31d98f8434881baa028a47cf235d76cb91d69" exitCode=0
Feb 17 08:55:31 crc kubenswrapper[4813]: I0217 08:55:31.451896 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg" event={"ID":"1d59ab30-e7e4-4056-a1ad-2ee71696466c","Type":"ContainerDied","Data":"b0c52294e6cf0b7b2699ec0200e31d98f8434881baa028a47cf235d76cb91d69"}
Feb 17 08:55:32 crc kubenswrapper[4813]: I0217 08:55:32.863589 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:32 crc kubenswrapper[4813]: I0217 08:55:32.993636 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9h7m\" (UniqueName: \"kubernetes.io/projected/1d59ab30-e7e4-4056-a1ad-2ee71696466c-kube-api-access-w9h7m\") pod \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\" (UID: \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\") "
Feb 17 08:55:32 crc kubenswrapper[4813]: I0217 08:55:32.993784 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d59ab30-e7e4-4056-a1ad-2ee71696466c-bundle\") pod \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\" (UID: \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\") "
Feb 17 08:55:32 crc kubenswrapper[4813]: I0217 08:55:32.993830 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d59ab30-e7e4-4056-a1ad-2ee71696466c-util\") pod \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\" (UID: \"1d59ab30-e7e4-4056-a1ad-2ee71696466c\") "
Feb 17 08:55:32 crc kubenswrapper[4813]: I0217 08:55:32.994826 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d59ab30-e7e4-4056-a1ad-2ee71696466c-bundle" (OuterVolumeSpecName: "bundle") pod "1d59ab30-e7e4-4056-a1ad-2ee71696466c" (UID: "1d59ab30-e7e4-4056-a1ad-2ee71696466c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:55:33 crc kubenswrapper[4813]: I0217 08:55:33.006682 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d59ab30-e7e4-4056-a1ad-2ee71696466c-kube-api-access-w9h7m" (OuterVolumeSpecName: "kube-api-access-w9h7m") pod "1d59ab30-e7e4-4056-a1ad-2ee71696466c" (UID: "1d59ab30-e7e4-4056-a1ad-2ee71696466c"). InnerVolumeSpecName "kube-api-access-w9h7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:55:33 crc kubenswrapper[4813]: I0217 08:55:33.026903 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d59ab30-e7e4-4056-a1ad-2ee71696466c-util" (OuterVolumeSpecName: "util") pod "1d59ab30-e7e4-4056-a1ad-2ee71696466c" (UID: "1d59ab30-e7e4-4056-a1ad-2ee71696466c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 08:55:33 crc kubenswrapper[4813]: I0217 08:55:33.095772 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9h7m\" (UniqueName: \"kubernetes.io/projected/1d59ab30-e7e4-4056-a1ad-2ee71696466c-kube-api-access-w9h7m\") on node \"crc\" DevicePath \"\""
Feb 17 08:55:33 crc kubenswrapper[4813]: I0217 08:55:33.095824 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d59ab30-e7e4-4056-a1ad-2ee71696466c-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 08:55:33 crc kubenswrapper[4813]: I0217 08:55:33.095842 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d59ab30-e7e4-4056-a1ad-2ee71696466c-util\") on node \"crc\" DevicePath \"\""
Feb 17 08:55:33 crc kubenswrapper[4813]: I0217 08:55:33.471243 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg" event={"ID":"1d59ab30-e7e4-4056-a1ad-2ee71696466c","Type":"ContainerDied","Data":"14f14f4d434effde0969182d2c385d687532221c59050cf0b817525b890b5945"}
Feb 17 08:55:33 crc kubenswrapper[4813]: I0217 08:55:33.471576 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14f14f4d434effde0969182d2c385d687532221c59050cf0b817525b890b5945"
Feb 17 08:55:33 crc kubenswrapper[4813]: I0217 08:55:33.471374 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.005867 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pl2rx"]
Feb 17 08:55:34 crc kubenswrapper[4813]: E0217 08:55:34.006298 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d59ab30-e7e4-4056-a1ad-2ee71696466c" containerName="extract"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.006327 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d59ab30-e7e4-4056-a1ad-2ee71696466c" containerName="extract"
Feb 17 08:55:34 crc kubenswrapper[4813]: E0217 08:55:34.006338 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d59ab30-e7e4-4056-a1ad-2ee71696466c" containerName="pull"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.006344 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d59ab30-e7e4-4056-a1ad-2ee71696466c" containerName="pull"
Feb 17 08:55:34 crc kubenswrapper[4813]: E0217 08:55:34.006365 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d59ab30-e7e4-4056-a1ad-2ee71696466c" containerName="util"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.006371 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d59ab30-e7e4-4056-a1ad-2ee71696466c" containerName="util"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.006476 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d59ab30-e7e4-4056-a1ad-2ee71696466c" containerName="extract"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.007228 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.028575 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pl2rx"]
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.110976 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-utilities\") pod \"certified-operators-pl2rx\" (UID: \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\") " pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.111436 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-catalog-content\") pod \"certified-operators-pl2rx\" (UID: \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\") " pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.111507 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7w9d\" (UniqueName: \"kubernetes.io/projected/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-kube-api-access-k7w9d\") pod \"certified-operators-pl2rx\" (UID: \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\") " pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.213071 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-catalog-content\") pod \"certified-operators-pl2rx\" (UID: \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\") " pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.213335 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7w9d\" (UniqueName: \"kubernetes.io/projected/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-kube-api-access-k7w9d\") pod \"certified-operators-pl2rx\" (UID: \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\") " pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.213447 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-utilities\") pod \"certified-operators-pl2rx\" (UID: \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\") " pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.213655 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-catalog-content\") pod \"certified-operators-pl2rx\" (UID: \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\") " pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.213974 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-utilities\") pod \"certified-operators-pl2rx\" (UID: \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\") " pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.240679 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7w9d\" (UniqueName: \"kubernetes.io/projected/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-kube-api-access-k7w9d\") pod \"certified-operators-pl2rx\" (UID: \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\") " pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.322183 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:34 crc kubenswrapper[4813]: I0217 08:55:34.544159 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pl2rx"]
Feb 17 08:55:34 crc kubenswrapper[4813]: W0217 08:55:34.556989 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88bbd864_f01b_4d1c_9680_88ca5b2dc3c5.slice/crio-3b2bf95e2345f7beda581cc168d00925b58bb61b2551d435586497ea603087e9 WatchSource:0}: Error finding container 3b2bf95e2345f7beda581cc168d00925b58bb61b2551d435586497ea603087e9: Status 404 returned error can't find the container with id 3b2bf95e2345f7beda581cc168d00925b58bb61b2551d435586497ea603087e9
Feb 17 08:55:35 crc kubenswrapper[4813]: I0217 08:55:35.165577 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 08:55:35 crc kubenswrapper[4813]: I0217 08:55:35.165874 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 08:55:35 crc kubenswrapper[4813]: I0217 08:55:35.486675 4813 generic.go:334] "Generic (PLEG): container finished" podID="88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" containerID="6dbee0df3cf58ae2311a78925aeecd3a9055ac82678df19e247faa10677c2ec4" exitCode=0
Feb 17 08:55:35 crc kubenswrapper[4813]: I0217 08:55:35.486766 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl2rx" event={"ID":"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5","Type":"ContainerDied","Data":"6dbee0df3cf58ae2311a78925aeecd3a9055ac82678df19e247faa10677c2ec4"}
Feb 17 08:55:35 crc kubenswrapper[4813]: I0217 08:55:35.486841 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl2rx" event={"ID":"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5","Type":"ContainerStarted","Data":"3b2bf95e2345f7beda581cc168d00925b58bb61b2551d435586497ea603087e9"}
Feb 17 08:55:36 crc kubenswrapper[4813]: I0217 08:55:36.493884 4813 generic.go:334] "Generic (PLEG): container finished" podID="88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" containerID="3253fc6bf3c4079360fc2a1e9adf7f4dacbc7070ffec72f14740ee0ec9671533" exitCode=0
Feb 17 08:55:36 crc kubenswrapper[4813]: I0217 08:55:36.494127 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl2rx" event={"ID":"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5","Type":"ContainerDied","Data":"3253fc6bf3c4079360fc2a1e9adf7f4dacbc7070ffec72f14740ee0ec9671533"}
Feb 17 08:55:37 crc kubenswrapper[4813]: I0217 08:55:37.503449 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl2rx" event={"ID":"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5","Type":"ContainerStarted","Data":"087ba477fe0f76bd3201c6b68a10b3dda43f0dd007ae8ca9feaaec9be01f3ec7"}
Feb 17 08:55:37 crc kubenswrapper[4813]: I0217 08:55:37.537546 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pl2rx" podStartSLOduration=3.140714361 podStartE2EDuration="4.537524557s" podCreationTimestamp="2026-02-17 08:55:33 +0000 UTC" firstStartedPulling="2026-02-17 08:55:35.488604103 +0000 UTC m=+883.149365336" lastFinishedPulling="2026-02-17 08:55:36.885414299 +0000 UTC m=+884.546175532" observedRunningTime="2026-02-17 08:55:37.533483432 +0000 UTC m=+885.194244705" watchObservedRunningTime="2026-02-17 08:55:37.537524557 +0000 UTC m=+885.198285790"
Feb 17 08:55:39 crc kubenswrapper[4813]: I0217 08:55:39.384765 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6"]
Feb 17 08:55:39 crc kubenswrapper[4813]: I0217 08:55:39.386397 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6"
Feb 17 08:55:39 crc kubenswrapper[4813]: I0217 08:55:39.389444 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-q7cm8"
Feb 17 08:55:39 crc kubenswrapper[4813]: I0217 08:55:39.418565 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6"]
Feb 17 08:55:39 crc kubenswrapper[4813]: I0217 08:55:39.504004 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrdrk\" (UniqueName: \"kubernetes.io/projected/572cdf9b-6953-4201-961f-5f2404993f44-kube-api-access-mrdrk\") pod \"openstack-operator-controller-init-776596fd4-6jgg6\" (UID: \"572cdf9b-6953-4201-961f-5f2404993f44\") " pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6"
Feb 17 08:55:39 crc kubenswrapper[4813]: I0217 08:55:39.605747 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrdrk\" (UniqueName: \"kubernetes.io/projected/572cdf9b-6953-4201-961f-5f2404993f44-kube-api-access-mrdrk\") pod \"openstack-operator-controller-init-776596fd4-6jgg6\" (UID: \"572cdf9b-6953-4201-961f-5f2404993f44\") " pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6"
Feb 17 08:55:39 crc kubenswrapper[4813]: I0217 08:55:39.631487 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrdrk\" (UniqueName: \"kubernetes.io/projected/572cdf9b-6953-4201-961f-5f2404993f44-kube-api-access-mrdrk\") pod \"openstack-operator-controller-init-776596fd4-6jgg6\" (UID: \"572cdf9b-6953-4201-961f-5f2404993f44\") " pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6"
Feb 17 08:55:39 crc kubenswrapper[4813]: I0217 08:55:39.703787 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6"
Feb 17 08:55:40 crc kubenswrapper[4813]: I0217 08:55:40.180052 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6"]
Feb 17 08:55:40 crc kubenswrapper[4813]: I0217 08:55:40.522292 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6" event={"ID":"572cdf9b-6953-4201-961f-5f2404993f44","Type":"ContainerStarted","Data":"50eb9fd709c2243b0489d2fcb5e00c68cf121c8ae4e9fad4bfc8b057dd936ab6"}
Feb 17 08:55:44 crc kubenswrapper[4813]: I0217 08:55:44.323015 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:44 crc kubenswrapper[4813]: I0217 08:55:44.323767 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:44 crc kubenswrapper[4813]: I0217 08:55:44.402158 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:44 crc kubenswrapper[4813]: I0217 08:55:44.591477 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pl2rx"
Feb 17 08:55:45 crc kubenswrapper[4813]: I0217 08:55:45.561153 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6" event={"ID":"572cdf9b-6953-4201-961f-5f2404993f44","Type":"ContainerStarted","Data":"a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9"}
Feb 17 08:55:45 crc kubenswrapper[4813]: I0217 08:55:45.603473 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6" podStartSLOduration=2.139016559 podStartE2EDuration="6.603460333s" podCreationTimestamp="2026-02-17 08:55:39 +0000 UTC" firstStartedPulling="2026-02-17 08:55:40.18775341 +0000 UTC m=+887.848514633" lastFinishedPulling="2026-02-17 08:55:44.652197184 +0000 UTC m=+892.312958407" observedRunningTime="2026-02-17 08:55:45.602112605 +0000 UTC m=+893.262873828" watchObservedRunningTime="2026-02-17 08:55:45.603460333 +0000 UTC m=+893.264221556"
Feb 17 08:55:46 crc kubenswrapper[4813]: I0217 08:55:46.568409 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6"
Feb 17 08:55:46 crc kubenswrapper[4813]: I0217 08:55:46.802557 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pl2rx"]
Feb 17 08:55:46 crc kubenswrapper[4813]: I0217 08:55:46.803008 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pl2rx" podUID="88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" containerName="registry-server" containerID="cri-o://087ba477fe0f76bd3201c6b68a10b3dda43f0dd007ae8ca9feaaec9be01f3ec7" gracePeriod=2
Feb 17 08:55:47 crc kubenswrapper[4813]: I0217 08:55:47.592231 4813 generic.go:334] "Generic (PLEG): container finished" podID="88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" containerID="087ba477fe0f76bd3201c6b68a10b3dda43f0dd007ae8ca9feaaec9be01f3ec7" exitCode=0
Feb 17 08:55:47 crc kubenswrapper[4813]: I0217 08:55:47.592345 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl2rx"
event={"ID":"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5","Type":"ContainerDied","Data":"087ba477fe0f76bd3201c6b68a10b3dda43f0dd007ae8ca9feaaec9be01f3ec7"} Feb 17 08:55:47 crc kubenswrapper[4813]: I0217 08:55:47.780424 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pl2rx" Feb 17 08:55:47 crc kubenswrapper[4813]: I0217 08:55:47.875347 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7w9d\" (UniqueName: \"kubernetes.io/projected/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-kube-api-access-k7w9d\") pod \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\" (UID: \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\") " Feb 17 08:55:47 crc kubenswrapper[4813]: I0217 08:55:47.875485 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-utilities\") pod \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\" (UID: \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\") " Feb 17 08:55:47 crc kubenswrapper[4813]: I0217 08:55:47.875625 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-catalog-content\") pod \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\" (UID: \"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5\") " Feb 17 08:55:47 crc kubenswrapper[4813]: I0217 08:55:47.876646 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-utilities" (OuterVolumeSpecName: "utilities") pod "88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" (UID: "88bbd864-f01b-4d1c-9680-88ca5b2dc3c5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:55:47 crc kubenswrapper[4813]: I0217 08:55:47.882656 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-kube-api-access-k7w9d" (OuterVolumeSpecName: "kube-api-access-k7w9d") pod "88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" (UID: "88bbd864-f01b-4d1c-9680-88ca5b2dc3c5"). InnerVolumeSpecName "kube-api-access-k7w9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:55:47 crc kubenswrapper[4813]: I0217 08:55:47.934224 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" (UID: "88bbd864-f01b-4d1c-9680-88ca5b2dc3c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:55:47 crc kubenswrapper[4813]: I0217 08:55:47.977605 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7w9d\" (UniqueName: \"kubernetes.io/projected/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-kube-api-access-k7w9d\") on node \"crc\" DevicePath \"\"" Feb 17 08:55:47 crc kubenswrapper[4813]: I0217 08:55:47.977649 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 08:55:47 crc kubenswrapper[4813]: I0217 08:55:47.977667 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 08:55:48 crc kubenswrapper[4813]: I0217 08:55:48.602553 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pl2rx" 
event={"ID":"88bbd864-f01b-4d1c-9680-88ca5b2dc3c5","Type":"ContainerDied","Data":"3b2bf95e2345f7beda581cc168d00925b58bb61b2551d435586497ea603087e9"} Feb 17 08:55:48 crc kubenswrapper[4813]: I0217 08:55:48.602638 4813 scope.go:117] "RemoveContainer" containerID="087ba477fe0f76bd3201c6b68a10b3dda43f0dd007ae8ca9feaaec9be01f3ec7" Feb 17 08:55:48 crc kubenswrapper[4813]: I0217 08:55:48.602644 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pl2rx" Feb 17 08:55:48 crc kubenswrapper[4813]: I0217 08:55:48.634956 4813 scope.go:117] "RemoveContainer" containerID="3253fc6bf3c4079360fc2a1e9adf7f4dacbc7070ffec72f14740ee0ec9671533" Feb 17 08:55:48 crc kubenswrapper[4813]: I0217 08:55:48.652936 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pl2rx"] Feb 17 08:55:48 crc kubenswrapper[4813]: I0217 08:55:48.658561 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pl2rx"] Feb 17 08:55:48 crc kubenswrapper[4813]: I0217 08:55:48.666194 4813 scope.go:117] "RemoveContainer" containerID="6dbee0df3cf58ae2311a78925aeecd3a9055ac82678df19e247faa10677c2ec4" Feb 17 08:55:49 crc kubenswrapper[4813]: I0217 08:55:49.120814 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" path="/var/lib/kubelet/pods/88bbd864-f01b-4d1c-9680-88ca5b2dc3c5/volumes" Feb 17 08:55:49 crc kubenswrapper[4813]: I0217 08:55:49.707215 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.220356 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rsxzt"] Feb 17 08:55:59 crc kubenswrapper[4813]: E0217 08:55:59.221446 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" containerName="extract-content" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.221470 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" containerName="extract-content" Feb 17 08:55:59 crc kubenswrapper[4813]: E0217 08:55:59.221494 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" containerName="registry-server" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.221509 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" containerName="registry-server" Feb 17 08:55:59 crc kubenswrapper[4813]: E0217 08:55:59.221539 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" containerName="extract-utilities" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.221551 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" containerName="extract-utilities" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.221768 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="88bbd864-f01b-4d1c-9680-88ca5b2dc3c5" containerName="registry-server" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.223262 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.246599 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rsxzt"] Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.332615 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-catalog-content\") pod \"community-operators-rsxzt\" (UID: \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\") " pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.332665 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwp8f\" (UniqueName: \"kubernetes.io/projected/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-kube-api-access-qwp8f\") pod \"community-operators-rsxzt\" (UID: \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\") " pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.332711 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-utilities\") pod \"community-operators-rsxzt\" (UID: \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\") " pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.433587 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwp8f\" (UniqueName: \"kubernetes.io/projected/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-kube-api-access-qwp8f\") pod \"community-operators-rsxzt\" (UID: \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\") " pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.433665 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-utilities\") pod \"community-operators-rsxzt\" (UID: \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\") " pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.433721 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-catalog-content\") pod \"community-operators-rsxzt\" (UID: \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\") " pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.434100 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-catalog-content\") pod \"community-operators-rsxzt\" (UID: \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\") " pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.434121 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-utilities\") pod \"community-operators-rsxzt\" (UID: \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\") " pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.459235 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwp8f\" (UniqueName: \"kubernetes.io/projected/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-kube-api-access-qwp8f\") pod \"community-operators-rsxzt\" (UID: \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\") " pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.551057 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:55:59 crc kubenswrapper[4813]: I0217 08:55:59.846130 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rsxzt"] Feb 17 08:56:00 crc kubenswrapper[4813]: I0217 08:56:00.687693 4813 generic.go:334] "Generic (PLEG): container finished" podID="f9a75c83-abd3-4a92-9ec5-9ed784340cb4" containerID="6b83e6ef0fb1cb0712eba5e9c3e0e5a748c8ca9e776221a92352f44a9a471a4d" exitCode=0 Feb 17 08:56:00 crc kubenswrapper[4813]: I0217 08:56:00.687789 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rsxzt" event={"ID":"f9a75c83-abd3-4a92-9ec5-9ed784340cb4","Type":"ContainerDied","Data":"6b83e6ef0fb1cb0712eba5e9c3e0e5a748c8ca9e776221a92352f44a9a471a4d"} Feb 17 08:56:00 crc kubenswrapper[4813]: I0217 08:56:00.687940 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rsxzt" event={"ID":"f9a75c83-abd3-4a92-9ec5-9ed784340cb4","Type":"ContainerStarted","Data":"0e55b1f848f145bdfdb140632fa19a9bed34e77b1af283833811bfe90eba2abb"} Feb 17 08:56:01 crc kubenswrapper[4813]: I0217 08:56:01.697005 4813 generic.go:334] "Generic (PLEG): container finished" podID="f9a75c83-abd3-4a92-9ec5-9ed784340cb4" containerID="bc9048b9d0167b5d64bb85743029721cc31133a5303f5aa939b9fde6f736433e" exitCode=0 Feb 17 08:56:01 crc kubenswrapper[4813]: I0217 08:56:01.697148 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rsxzt" event={"ID":"f9a75c83-abd3-4a92-9ec5-9ed784340cb4","Type":"ContainerDied","Data":"bc9048b9d0167b5d64bb85743029721cc31133a5303f5aa939b9fde6f736433e"} Feb 17 08:56:02 crc kubenswrapper[4813]: I0217 08:56:02.708554 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rsxzt" 
event={"ID":"f9a75c83-abd3-4a92-9ec5-9ed784340cb4","Type":"ContainerStarted","Data":"c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370"} Feb 17 08:56:02 crc kubenswrapper[4813]: I0217 08:56:02.735110 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rsxzt" podStartSLOduration=2.322174637 podStartE2EDuration="3.735083154s" podCreationTimestamp="2026-02-17 08:55:59 +0000 UTC" firstStartedPulling="2026-02-17 08:56:00.689810027 +0000 UTC m=+908.350571250" lastFinishedPulling="2026-02-17 08:56:02.102718544 +0000 UTC m=+909.763479767" observedRunningTime="2026-02-17 08:56:02.728782035 +0000 UTC m=+910.389543308" watchObservedRunningTime="2026-02-17 08:56:02.735083154 +0000 UTC m=+910.395844417" Feb 17 08:56:05 crc kubenswrapper[4813]: I0217 08:56:05.165712 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:56:05 crc kubenswrapper[4813]: I0217 08:56:05.166008 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.551902 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.553141 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.620100 4813 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.673994 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.674880 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.676681 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-m6tgk" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.677584 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.678328 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.680391 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-68xhm" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.683542 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.688847 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.704292 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.705074 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.707720 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ccrnt" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.730208 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.736649 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.737456 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.739417 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6v2jw" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.750910 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.765959 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-c6plm"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.766808 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-c6plm" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.772035 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hgcts" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.773490 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-c6plm"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.782441 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtd2b\" (UniqueName: \"kubernetes.io/projected/f8cae50b-944c-4dfd-8cae-5275b9290a07-kube-api-access-wtd2b\") pod \"designate-operator-controller-manager-55cc45767f-r79hm\" (UID: \"f8cae50b-944c-4dfd-8cae-5275b9290a07\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.782539 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cbq4\" (UniqueName: 
\"kubernetes.io/projected/0626f4b2-1593-4b46-972d-079f3fe29ce3-kube-api-access-6cbq4\") pod \"glance-operator-controller-manager-68c6d499cb-zhnzs\" (UID: \"0626f4b2-1593-4b46-972d-079f3fe29ce3\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.782564 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7clw\" (UniqueName: \"kubernetes.io/projected/0fdf5f90-ddf7-4c01-ba25-037628a298fb-kube-api-access-s7clw\") pod \"barbican-operator-controller-manager-c4b7d6946-hz7df\" (UID: \"0fdf5f90-ddf7-4c01-ba25-037628a298fb\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.782584 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmq4z\" (UniqueName: \"kubernetes.io/projected/b58443c8-72d6-42ba-a920-9c11a9bc6b6e-kube-api-access-pmq4z\") pod \"cinder-operator-controller-manager-57746b5ff9-mkmzp\" (UID: \"b58443c8-72d6-42ba-a920-9c11a9bc6b6e\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.782656 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.783432 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.787366 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-n56zk" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.801390 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.801925 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.805361 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.806104 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.811816 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.811989 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9grmm" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.812554 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.813482 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.817661 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8nksf" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.839957 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.841096 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.847592 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-z4md4" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.847723 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.850509 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.854498 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.855328 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.858186 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jm89g" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.863061 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.863869 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.868082 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.868581 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-qg565" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.882198 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.883155 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zrf4\" (UniqueName: \"kubernetes.io/projected/7e6fd8d2-9aeb-432a-9c01-e22332432a28-kube-api-access-9zrf4\") pod \"horizon-operator-controller-manager-54fb488b88-vxh2x\" (UID: \"7e6fd8d2-9aeb-432a-9c01-e22332432a28\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.883265 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bscfx\" 
(UniqueName: \"kubernetes.io/projected/5c6a587a-9a0b-458f-aea4-445dbcfdaecc-kube-api-access-bscfx\") pod \"ironic-operator-controller-manager-6494cdbf8f-h5hbm\" (UID: \"5c6a587a-9a0b-458f-aea4-445dbcfdaecc\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.883377 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert\") pod \"infra-operator-controller-manager-66d6b5f488-flpcz\" (UID: \"ff0f7626-5da3-4763-8ae6-714ede4a2445\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.883458 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnn9j\" (UniqueName: \"kubernetes.io/projected/daecd5b7-6576-4ddf-bb48-2131c26a9995-kube-api-access-xnn9j\") pod \"heat-operator-controller-manager-9595d6797-c6plm\" (UID: \"daecd5b7-6576-4ddf-bb48-2131c26a9995\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-c6plm" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.883545 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fksfj\" (UniqueName: \"kubernetes.io/projected/ff0f7626-5da3-4763-8ae6-714ede4a2445-kube-api-access-fksfj\") pod \"infra-operator-controller-manager-66d6b5f488-flpcz\" (UID: \"ff0f7626-5da3-4763-8ae6-714ede4a2445\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.883654 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cbq4\" (UniqueName: \"kubernetes.io/projected/0626f4b2-1593-4b46-972d-079f3fe29ce3-kube-api-access-6cbq4\") pod 
\"glance-operator-controller-manager-68c6d499cb-zhnzs\" (UID: \"0626f4b2-1593-4b46-972d-079f3fe29ce3\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.883747 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7clw\" (UniqueName: \"kubernetes.io/projected/0fdf5f90-ddf7-4c01-ba25-037628a298fb-kube-api-access-s7clw\") pod \"barbican-operator-controller-manager-c4b7d6946-hz7df\" (UID: \"0fdf5f90-ddf7-4c01-ba25-037628a298fb\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.883848 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmq4z\" (UniqueName: \"kubernetes.io/projected/b58443c8-72d6-42ba-a920-9c11a9bc6b6e-kube-api-access-pmq4z\") pod \"cinder-operator-controller-manager-57746b5ff9-mkmzp\" (UID: \"b58443c8-72d6-42ba-a920-9c11a9bc6b6e\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.883932 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtd2b\" (UniqueName: \"kubernetes.io/projected/f8cae50b-944c-4dfd-8cae-5275b9290a07-kube-api-access-wtd2b\") pod \"designate-operator-controller-manager-55cc45767f-r79hm\" (UID: \"f8cae50b-944c-4dfd-8cae-5275b9290a07\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.884021 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfzb\" (UniqueName: \"kubernetes.io/projected/4bfccf88-ba10-4e4e-a6f8-d3d7a362990d-kube-api-access-hqfzb\") pod \"keystone-operator-controller-manager-6c78d668d5-k4qmx\" (UID: \"4bfccf88-ba10-4e4e-a6f8-d3d7a362990d\") " 
pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.909668 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.910670 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.914689 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-5s5sd" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.936827 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtd2b\" (UniqueName: \"kubernetes.io/projected/f8cae50b-944c-4dfd-8cae-5275b9290a07-kube-api-access-wtd2b\") pod \"designate-operator-controller-manager-55cc45767f-r79hm\" (UID: \"f8cae50b-944c-4dfd-8cae-5275b9290a07\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.937471 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cbq4\" (UniqueName: \"kubernetes.io/projected/0626f4b2-1593-4b46-972d-079f3fe29ce3-kube-api-access-6cbq4\") pod \"glance-operator-controller-manager-68c6d499cb-zhnzs\" (UID: \"0626f4b2-1593-4b46-972d-079f3fe29ce3\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.937648 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmq4z\" (UniqueName: \"kubernetes.io/projected/b58443c8-72d6-42ba-a920-9c11a9bc6b6e-kube-api-access-pmq4z\") pod \"cinder-operator-controller-manager-57746b5ff9-mkmzp\" (UID: \"b58443c8-72d6-42ba-a920-9c11a9bc6b6e\") " 
pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.946113 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.960909 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7clw\" (UniqueName: \"kubernetes.io/projected/0fdf5f90-ddf7-4c01-ba25-037628a298fb-kube-api-access-s7clw\") pod \"barbican-operator-controller-manager-c4b7d6946-hz7df\" (UID: \"0fdf5f90-ddf7-4c01-ba25-037628a298fb\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.978675 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6"] Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.985277 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fksfj\" (UniqueName: \"kubernetes.io/projected/ff0f7626-5da3-4763-8ae6-714ede4a2445-kube-api-access-fksfj\") pod \"infra-operator-controller-manager-66d6b5f488-flpcz\" (UID: \"ff0f7626-5da3-4763-8ae6-714ede4a2445\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.986147 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rldnt\" (UniqueName: \"kubernetes.io/projected/d656660c-1dd3-4c91-9ef7-12248f1f388a-kube-api-access-rldnt\") pod \"neutron-operator-controller-manager-54967dbbdf-7znk6\" (UID: \"d656660c-1dd3-4c91-9ef7-12248f1f388a\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.986237 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jttzq\" (UniqueName: \"kubernetes.io/projected/39e9a182-3baa-4d60-ac63-00d40443be7b-kube-api-access-jttzq\") pod \"manila-operator-controller-manager-96fff9cb8-cktr5\" (UID: \"39e9a182-3baa-4d60-ac63-00d40443be7b\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.986348 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76kl5\" (UniqueName: \"kubernetes.io/projected/0a961df2-a4a7-431d-a389-1cafd967a0bc-kube-api-access-76kl5\") pod \"mariadb-operator-controller-manager-66997756f6-8ppzs\" (UID: \"0a961df2-a4a7-431d-a389-1cafd967a0bc\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.986429 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfzb\" (UniqueName: \"kubernetes.io/projected/4bfccf88-ba10-4e4e-a6f8-d3d7a362990d-kube-api-access-hqfzb\") pod \"keystone-operator-controller-manager-6c78d668d5-k4qmx\" (UID: \"4bfccf88-ba10-4e4e-a6f8-d3d7a362990d\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.986521 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zrf4\" (UniqueName: \"kubernetes.io/projected/7e6fd8d2-9aeb-432a-9c01-e22332432a28-kube-api-access-9zrf4\") pod \"horizon-operator-controller-manager-54fb488b88-vxh2x\" (UID: \"7e6fd8d2-9aeb-432a-9c01-e22332432a28\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.986604 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bscfx\" (UniqueName: 
\"kubernetes.io/projected/5c6a587a-9a0b-458f-aea4-445dbcfdaecc-kube-api-access-bscfx\") pod \"ironic-operator-controller-manager-6494cdbf8f-h5hbm\" (UID: \"5c6a587a-9a0b-458f-aea4-445dbcfdaecc\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.986695 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert\") pod \"infra-operator-controller-manager-66d6b5f488-flpcz\" (UID: \"ff0f7626-5da3-4763-8ae6-714ede4a2445\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.986978 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnn9j\" (UniqueName: \"kubernetes.io/projected/daecd5b7-6576-4ddf-bb48-2131c26a9995-kube-api-access-xnn9j\") pod \"heat-operator-controller-manager-9595d6797-c6plm\" (UID: \"daecd5b7-6576-4ddf-bb48-2131c26a9995\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-c6plm" Feb 17 08:56:09 crc kubenswrapper[4813]: E0217 08:56:09.987389 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 08:56:09 crc kubenswrapper[4813]: E0217 08:56:09.987445 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert podName:ff0f7626-5da3-4763-8ae6-714ede4a2445 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:10.487426438 +0000 UTC m=+918.148187661 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert") pod "infra-operator-controller-manager-66d6b5f488-flpcz" (UID: "ff0f7626-5da3-4763-8ae6-714ede4a2445") : secret "infra-operator-webhook-server-cert" not found Feb 17 08:56:09 crc kubenswrapper[4813]: I0217 08:56:09.993973 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.010027 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.016698 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zrf4\" (UniqueName: \"kubernetes.io/projected/7e6fd8d2-9aeb-432a-9c01-e22332432a28-kube-api-access-9zrf4\") pod \"horizon-operator-controller-manager-54fb488b88-vxh2x\" (UID: \"7e6fd8d2-9aeb-432a-9c01-e22332432a28\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.021008 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.025445 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfzb\" (UniqueName: \"kubernetes.io/projected/4bfccf88-ba10-4e4e-a6f8-d3d7a362990d-kube-api-access-hqfzb\") pod \"keystone-operator-controller-manager-6c78d668d5-k4qmx\" (UID: \"4bfccf88-ba10-4e4e-a6f8-d3d7a362990d\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.026974 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bscfx\" (UniqueName: \"kubernetes.io/projected/5c6a587a-9a0b-458f-aea4-445dbcfdaecc-kube-api-access-bscfx\") pod \"ironic-operator-controller-manager-6494cdbf8f-h5hbm\" (UID: \"5c6a587a-9a0b-458f-aea4-445dbcfdaecc\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.027415 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnn9j\" (UniqueName: \"kubernetes.io/projected/daecd5b7-6576-4ddf-bb48-2131c26a9995-kube-api-access-xnn9j\") pod \"heat-operator-controller-manager-9595d6797-c6plm\" (UID: \"daecd5b7-6576-4ddf-bb48-2131c26a9995\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-c6plm" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.050037 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fksfj\" (UniqueName: \"kubernetes.io/projected/ff0f7626-5da3-4763-8ae6-714ede4a2445-kube-api-access-fksfj\") pod \"infra-operator-controller-manager-66d6b5f488-flpcz\" (UID: \"ff0f7626-5da3-4763-8ae6-714ede4a2445\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.055647 4813 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.057926 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rsxzt"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.069836 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.071320 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.076798 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rx27j" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.088139 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rldnt\" (UniqueName: \"kubernetes.io/projected/d656660c-1dd3-4c91-9ef7-12248f1f388a-kube-api-access-rldnt\") pod \"neutron-operator-controller-manager-54967dbbdf-7znk6\" (UID: \"d656660c-1dd3-4c91-9ef7-12248f1f388a\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.088174 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jttzq\" (UniqueName: \"kubernetes.io/projected/39e9a182-3baa-4d60-ac63-00d40443be7b-kube-api-access-jttzq\") pod \"manila-operator-controller-manager-96fff9cb8-cktr5\" (UID: \"39e9a182-3baa-4d60-ac63-00d40443be7b\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.088198 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76kl5\" 
(UniqueName: \"kubernetes.io/projected/0a961df2-a4a7-431d-a389-1cafd967a0bc-kube-api-access-76kl5\") pod \"mariadb-operator-controller-manager-66997756f6-8ppzs\" (UID: \"0a961df2-a4a7-431d-a389-1cafd967a0bc\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.103091 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-c6plm" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.112219 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.113064 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.113140 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.113462 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76kl5\" (UniqueName: \"kubernetes.io/projected/0a961df2-a4a7-431d-a389-1cafd967a0bc-kube-api-access-76kl5\") pod \"mariadb-operator-controller-manager-66997756f6-8ppzs\" (UID: \"0a961df2-a4a7-431d-a389-1cafd967a0bc\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.115165 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-qh5fj" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.115938 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rldnt\" (UniqueName: \"kubernetes.io/projected/d656660c-1dd3-4c91-9ef7-12248f1f388a-kube-api-access-rldnt\") pod \"neutron-operator-controller-manager-54967dbbdf-7znk6\" (UID: \"d656660c-1dd3-4c91-9ef7-12248f1f388a\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.117741 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.126674 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jttzq\" (UniqueName: \"kubernetes.io/projected/39e9a182-3baa-4d60-ac63-00d40443be7b-kube-api-access-jttzq\") pod \"manila-operator-controller-manager-96fff9cb8-cktr5\" (UID: \"39e9a182-3baa-4d60-ac63-00d40443be7b\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.127475 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.145450 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.157558 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.157637 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.158735 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.162992 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.164051 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-447nr" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.167450 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.168399 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.174713 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-75tzv" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.181763 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.193147 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.193608 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.193843 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrs4d\" (UniqueName: \"kubernetes.io/projected/08d18b9c-b137-4735-9e80-95636feac4ed-kube-api-access-nrs4d\") pod \"octavia-operator-controller-manager-745bbbd77b-84kxx\" (UID: \"08d18b9c-b137-4735-9e80-95636feac4ed\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.194050 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwfb4\" (UniqueName: \"kubernetes.io/projected/28206bb6-553c-4ccd-bb15-8c42c7f34415-kube-api-access-cwfb4\") pod \"nova-operator-controller-manager-5ddd85db87-wbbfl\" (UID: \"28206bb6-553c-4ccd-bb15-8c42c7f34415\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.217860 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.229913 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.230774 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.231447 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.231890 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.236737 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zt4w6" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.237123 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7nwgc" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.258773 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.289223 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.295951 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb2c2\" (UniqueName: \"kubernetes.io/projected/c4ef02fa-778f-4072-b15c-a8e98631c083-kube-api-access-nb2c2\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m\" (UID: \"c4ef02fa-778f-4072-b15c-a8e98631c083\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.296009 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnq5b\" (UniqueName: \"kubernetes.io/projected/6451fc3e-e020-442e-b5d2-7e1094379337-kube-api-access-tnq5b\") pod \"ovn-operator-controller-manager-85c99d655-zzj6k\" (UID: \"6451fc3e-e020-442e-b5d2-7e1094379337\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.296045 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmvz9\" (UniqueName: \"kubernetes.io/projected/d633c51f-1eea-4111-a46d-199e2f203c14-kube-api-access-pmvz9\") pod \"swift-operator-controller-manager-79558bbfbf-w9mll\" (UID: \"d633c51f-1eea-4111-a46d-199e2f203c14\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.296198 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrs4d\" (UniqueName: \"kubernetes.io/projected/08d18b9c-b137-4735-9e80-95636feac4ed-kube-api-access-nrs4d\") pod \"octavia-operator-controller-manager-745bbbd77b-84kxx\" (UID: \"08d18b9c-b137-4735-9e80-95636feac4ed\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.296395 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwfb4\" (UniqueName: \"kubernetes.io/projected/28206bb6-553c-4ccd-bb15-8c42c7f34415-kube-api-access-cwfb4\") pod \"nova-operator-controller-manager-5ddd85db87-wbbfl\" (UID: \"28206bb6-553c-4ccd-bb15-8c42c7f34415\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.296451 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m\" (UID: \"c4ef02fa-778f-4072-b15c-a8e98631c083\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.296513 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lvll\" (UniqueName: 
\"kubernetes.io/projected/8e7a72dd-76e6-47b0-8d51-aad9504620c0-kube-api-access-6lvll\") pod \"placement-operator-controller-manager-57bd55f9b7-4sccr\" (UID: \"8e7a72dd-76e6-47b0-8d51-aad9504620c0\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.301212 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.302235 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.314342 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-vwkx2" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.319764 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwfb4\" (UniqueName: \"kubernetes.io/projected/28206bb6-553c-4ccd-bb15-8c42c7f34415-kube-api-access-cwfb4\") pod \"nova-operator-controller-manager-5ddd85db87-wbbfl\" (UID: \"28206bb6-553c-4ccd-bb15-8c42c7f34415\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.326961 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrs4d\" (UniqueName: \"kubernetes.io/projected/08d18b9c-b137-4735-9e80-95636feac4ed-kube-api-access-nrs4d\") pod \"octavia-operator-controller-manager-745bbbd77b-84kxx\" (UID: \"08d18b9c-b137-4735-9e80-95636feac4ed\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.359424 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.359912 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.380369 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.381245 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.384626 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-rqn8c" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.384781 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.387199 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.399574 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m\" (UID: \"c4ef02fa-778f-4072-b15c-a8e98631c083\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.399620 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lvll\" (UniqueName: \"kubernetes.io/projected/8e7a72dd-76e6-47b0-8d51-aad9504620c0-kube-api-access-6lvll\") pod 
\"placement-operator-controller-manager-57bd55f9b7-4sccr\" (UID: \"8e7a72dd-76e6-47b0-8d51-aad9504620c0\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.399651 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb2c2\" (UniqueName: \"kubernetes.io/projected/c4ef02fa-778f-4072-b15c-a8e98631c083-kube-api-access-nb2c2\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m\" (UID: \"c4ef02fa-778f-4072-b15c-a8e98631c083\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.399672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnq5b\" (UniqueName: \"kubernetes.io/projected/6451fc3e-e020-442e-b5d2-7e1094379337-kube-api-access-tnq5b\") pod \"ovn-operator-controller-manager-85c99d655-zzj6k\" (UID: \"6451fc3e-e020-442e-b5d2-7e1094379337\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.399694 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x49l\" (UniqueName: \"kubernetes.io/projected/cfd95ad0-3c25-4884-a5fa-d91d1f771c1e-kube-api-access-2x49l\") pod \"telemetry-operator-controller-manager-56dc67d744-dljh7\" (UID: \"cfd95ad0-3c25-4884-a5fa-d91d1f771c1e\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.399722 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmvz9\" (UniqueName: \"kubernetes.io/projected/d633c51f-1eea-4111-a46d-199e2f203c14-kube-api-access-pmvz9\") pod \"swift-operator-controller-manager-79558bbfbf-w9mll\" (UID: \"d633c51f-1eea-4111-a46d-199e2f203c14\") " 
pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll" Feb 17 08:56:10 crc kubenswrapper[4813]: E0217 08:56:10.400332 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 08:56:10 crc kubenswrapper[4813]: E0217 08:56:10.400371 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert podName:c4ef02fa-778f-4072-b15c-a8e98631c083 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:10.900357955 +0000 UTC m=+918.561119178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" (UID: "c4ef02fa-778f-4072-b15c-a8e98631c083") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.404911 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.405755 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.417094 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-znkld" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.419838 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.435095 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.460170 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb2c2\" (UniqueName: \"kubernetes.io/projected/c4ef02fa-778f-4072-b15c-a8e98631c083-kube-api-access-nb2c2\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m\" (UID: \"c4ef02fa-778f-4072-b15c-a8e98631c083\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.467743 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnq5b\" (UniqueName: \"kubernetes.io/projected/6451fc3e-e020-442e-b5d2-7e1094379337-kube-api-access-tnq5b\") pod \"ovn-operator-controller-manager-85c99d655-zzj6k\" (UID: \"6451fc3e-e020-442e-b5d2-7e1094379337\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.471800 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmvz9\" (UniqueName: \"kubernetes.io/projected/d633c51f-1eea-4111-a46d-199e2f203c14-kube-api-access-pmvz9\") pod \"swift-operator-controller-manager-79558bbfbf-w9mll\" (UID: \"d633c51f-1eea-4111-a46d-199e2f203c14\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.479010 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lvll\" (UniqueName: \"kubernetes.io/projected/8e7a72dd-76e6-47b0-8d51-aad9504620c0-kube-api-access-6lvll\") pod \"placement-operator-controller-manager-57bd55f9b7-4sccr\" (UID: \"8e7a72dd-76e6-47b0-8d51-aad9504620c0\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.482778 4813 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.500977 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w55q\" (UniqueName: \"kubernetes.io/projected/1274f46d-7df1-478f-ad9f-4df095082c3a-kube-api-access-6w55q\") pod \"test-operator-controller-manager-8467ccb4c8-gwmhd\" (UID: \"1274f46d-7df1-478f-ad9f-4df095082c3a\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.501060 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert\") pod \"infra-operator-controller-manager-66d6b5f488-flpcz\" (UID: \"ff0f7626-5da3-4763-8ae6-714ede4a2445\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.501090 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x49l\" (UniqueName: \"kubernetes.io/projected/cfd95ad0-3c25-4884-a5fa-d91d1f771c1e-kube-api-access-2x49l\") pod \"telemetry-operator-controller-manager-56dc67d744-dljh7\" (UID: \"cfd95ad0-3c25-4884-a5fa-d91d1f771c1e\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.501142 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp79j\" (UniqueName: \"kubernetes.io/projected/f7cbb633-768c-4ab3-9243-252a24046c73-kube-api-access-zp79j\") pod \"watcher-operator-controller-manager-56dcfd7757-sf2sv\" (UID: \"f7cbb633-768c-4ab3-9243-252a24046c73\") " pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" Feb 17 08:56:10 crc 
kubenswrapper[4813]: E0217 08:56:10.501291 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 08:56:10 crc kubenswrapper[4813]: E0217 08:56:10.501349 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert podName:ff0f7626-5da3-4763-8ae6-714ede4a2445 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:11.501334898 +0000 UTC m=+919.162096121 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert") pod "infra-operator-controller-manager-66d6b5f488-flpcz" (UID: "ff0f7626-5da3-4763-8ae6-714ede4a2445") : secret "infra-operator-webhook-server-cert" not found Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.505704 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.533901 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x49l\" (UniqueName: \"kubernetes.io/projected/cfd95ad0-3c25-4884-a5fa-d91d1f771c1e-kube-api-access-2x49l\") pod \"telemetry-operator-controller-manager-56dc67d744-dljh7\" (UID: \"cfd95ad0-3c25-4884-a5fa-d91d1f771c1e\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.540565 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.547467 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.548656 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.555906 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.556713 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.584515 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lh6bn" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.584980 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.589534 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.633801 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w55q\" (UniqueName: \"kubernetes.io/projected/1274f46d-7df1-478f-ad9f-4df095082c3a-kube-api-access-6w55q\") pod \"test-operator-controller-manager-8467ccb4c8-gwmhd\" (UID: \"1274f46d-7df1-478f-ad9f-4df095082c3a\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.634155 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.634218 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7vrw\" (UniqueName: \"kubernetes.io/projected/d9a0e392-aeea-4033-939e-52e42ebf3fa5-kube-api-access-k7vrw\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.634265 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp79j\" (UniqueName: \"kubernetes.io/projected/f7cbb633-768c-4ab3-9243-252a24046c73-kube-api-access-zp79j\") pod 
\"watcher-operator-controller-manager-56dcfd7757-sf2sv\" (UID: \"f7cbb633-768c-4ab3-9243-252a24046c73\") " pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.634289 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.642769 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.644034 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.697017 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp79j\" (UniqueName: \"kubernetes.io/projected/f7cbb633-768c-4ab3-9243-252a24046c73-kube-api-access-zp79j\") pod \"watcher-operator-controller-manager-56dcfd7757-sf2sv\" (UID: \"f7cbb633-768c-4ab3-9243-252a24046c73\") " pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.715863 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w55q\" (UniqueName: \"kubernetes.io/projected/1274f46d-7df1-478f-ad9f-4df095082c3a-kube-api-access-6w55q\") pod \"test-operator-controller-manager-8467ccb4c8-gwmhd\" (UID: \"1274f46d-7df1-478f-ad9f-4df095082c3a\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd" Feb 17 
08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.742649 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.742710 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7vrw\" (UniqueName: \"kubernetes.io/projected/d9a0e392-aeea-4033-939e-52e42ebf3fa5-kube-api-access-k7vrw\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.742753 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:10 crc kubenswrapper[4813]: E0217 08:56:10.742954 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 08:56:10 crc kubenswrapper[4813]: E0217 08:56:10.743006 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs podName:d9a0e392-aeea-4033-939e-52e42ebf3fa5 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:11.242993163 +0000 UTC m=+918.903754386 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs") pod "openstack-operator-controller-manager-6bf8b7b945-pqgpx" (UID: "d9a0e392-aeea-4033-939e-52e42ebf3fa5") : secret "metrics-server-cert" not found Feb 17 08:56:10 crc kubenswrapper[4813]: E0217 08:56:10.743301 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 08:56:10 crc kubenswrapper[4813]: E0217 08:56:10.744551 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs podName:d9a0e392-aeea-4033-939e-52e42ebf3fa5 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:11.244535127 +0000 UTC m=+918.905296350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs") pod "openstack-operator-controller-manager-6bf8b7b945-pqgpx" (UID: "d9a0e392-aeea-4033-939e-52e42ebf3fa5") : secret "webhook-server-cert" not found Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.757283 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.784714 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.785956 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7vrw\" (UniqueName: \"kubernetes.io/projected/d9a0e392-aeea-4033-939e-52e42ebf3fa5-kube-api-access-k7vrw\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.786048 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp" event={"ID":"b58443c8-72d6-42ba-a920-9c11a9bc6b6e","Type":"ContainerStarted","Data":"bf07a8363da4bba0a021ff5798a626173787d7066b40ef88aa5909942333f74a"} Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.788368 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.789279 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.793202 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bjzf8" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.809765 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.823384 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm"] Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.949152 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m\" (UID: \"c4ef02fa-778f-4072-b15c-a8e98631c083\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:10 crc kubenswrapper[4813]: I0217 08:56:10.949230 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wphd\" (UniqueName: \"kubernetes.io/projected/c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf-kube-api-access-5wphd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xlfjs\" (UID: \"c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs" Feb 17 08:56:10 crc kubenswrapper[4813]: E0217 08:56:10.950153 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 08:56:10 crc kubenswrapper[4813]: E0217 08:56:10.950234 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert podName:c4ef02fa-778f-4072-b15c-a8e98631c083 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:11.950215328 +0000 UTC m=+919.610976551 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" (UID: "c4ef02fa-778f-4072-b15c-a8e98631c083") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.050095 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wphd\" (UniqueName: \"kubernetes.io/projected/c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf-kube-api-access-5wphd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xlfjs\" (UID: \"c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs" Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.051254 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.088125 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wphd\" (UniqueName: \"kubernetes.io/projected/c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf-kube-api-access-5wphd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xlfjs\" (UID: \"c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs" Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.119665 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs" Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.177752 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.243567 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-c6plm"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.255418 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.255578 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.256253 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.256326 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs podName:d9a0e392-aeea-4033-939e-52e42ebf3fa5 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:12.256291296 +0000 UTC m=+919.917052519 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs") pod "openstack-operator-controller-manager-6bf8b7b945-pqgpx" (UID: "d9a0e392-aeea-4033-939e-52e42ebf3fa5") : secret "metrics-server-cert" not found Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.256728 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.256759 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs podName:d9a0e392-aeea-4033-939e-52e42ebf3fa5 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:12.256751659 +0000 UTC m=+919.917512882 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs") pod "openstack-operator-controller-manager-6bf8b7b945-pqgpx" (UID: "d9a0e392-aeea-4033-939e-52e42ebf3fa5") : secret "webhook-server-cert" not found Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.563979 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert\") pod \"infra-operator-controller-manager-66d6b5f488-flpcz\" (UID: \"ff0f7626-5da3-4763-8ae6-714ede4a2445\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.564160 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.564371 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert 
podName:ff0f7626-5da3-4763-8ae6-714ede4a2445 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:13.56435256 +0000 UTC m=+921.225113783 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert") pod "infra-operator-controller-manager-66d6b5f488-flpcz" (UID: "ff0f7626-5da3-4763-8ae6-714ede4a2445") : secret "infra-operator-webhook-server-cert" not found Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.640588 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.674113 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.686521 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.692788 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x"] Feb 17 08:56:11 crc kubenswrapper[4813]: W0217 08:56:11.693099 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a961df2_a4a7_431d_a389_1cafd967a0bc.slice/crio-0caf035b54aa2d8d3065c6bf23ea56dd1f9741bdde2d78b3f6a70fe604985f06 WatchSource:0}: Error finding container 0caf035b54aa2d8d3065c6bf23ea56dd1f9741bdde2d78b3f6a70fe604985f06: Status 404 returned error can't find the container with id 0caf035b54aa2d8d3065c6bf23ea56dd1f9741bdde2d78b3f6a70fe604985f06 Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.698179 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm"] Feb 17 
08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.715922 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx"] Feb 17 08:56:11 crc kubenswrapper[4813]: W0217 08:56:11.723590 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd633c51f_1eea_4111_a46d_199e2f203c14.slice/crio-d6be207a90677336fd9f3ef2d675b83ce6a6ecfc0752734b1fa097a5e8dac480 WatchSource:0}: Error finding container d6be207a90677336fd9f3ef2d675b83ce6a6ecfc0752734b1fa097a5e8dac480: Status 404 returned error can't find the container with id d6be207a90677336fd9f3ef2d675b83ce6a6ecfc0752734b1fa097a5e8dac480 Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.723627 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.744127 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.760298 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.795781 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.803796 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.811131 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm" 
event={"ID":"5c6a587a-9a0b-458f-aea4-445dbcfdaecc","Type":"ContainerStarted","Data":"91d6e2e1a1a60d5d3aca202e888c3ca6162d0f5bcea4e9c552f201fc18c7358a"} Feb 17 08:56:11 crc kubenswrapper[4813]: W0217 08:56:11.813904 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfd95ad0_3c25_4884_a5fa_d91d1f771c1e.slice/crio-834202c6da496818efe0fe1f4d3e519d95cd0fcc57da5be309ef2e422785c3a3 WatchSource:0}: Error finding container 834202c6da496818efe0fe1f4d3e519d95cd0fcc57da5be309ef2e422785c3a3: Status 404 returned error can't find the container with id 834202c6da496818efe0fe1f4d3e519d95cd0fcc57da5be309ef2e422785c3a3 Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.814814 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl" event={"ID":"28206bb6-553c-4ccd-bb15-8c42c7f34415","Type":"ContainerStarted","Data":"81bb7ba891b51121785a8917a48e6b6388e6104d130069171ecedb4b075673ae"} Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.816005 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x" event={"ID":"7e6fd8d2-9aeb-432a-9c01-e22332432a28","Type":"ContainerStarted","Data":"bcd45b8dc539a588cf4ce48f2cb99b8829e1c07f434c081069ee31c60516c5c3"} Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.817351 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx" event={"ID":"4bfccf88-ba10-4e4e-a6f8-d3d7a362990d","Type":"ContainerStarted","Data":"d6425aa4aa5eb0b143a864d4e838e076eef39d14efc14571358830fe542ee29a"} Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.818370 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-c6plm" 
event={"ID":"daecd5b7-6576-4ddf-bb48-2131c26a9995","Type":"ContainerStarted","Data":"8e05671fc065ae8bdcc855aba83de8d70fd507812ae67d92005f2561dcc703cf"} Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.819094 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx" event={"ID":"08d18b9c-b137-4735-9e80-95636feac4ed","Type":"ContainerStarted","Data":"3704e5d0c054dc134d158341156b552fcdfcd06750013d6b5eb66f4bc8e90cb1"} Feb 17 08:56:11 crc kubenswrapper[4813]: W0217 08:56:11.820217 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2c4389e_21f4_4c70_9bc5_2eb9b93ad2cf.slice/crio-d69e5555d4cb0316e8c6a0c18c98d54e7ce985759ea33aae541caac6494e3d04 WatchSource:0}: Error finding container d69e5555d4cb0316e8c6a0c18c98d54e7ce985759ea33aae541caac6494e3d04: Status 404 returned error can't find the container with id d69e5555d4cb0316e8c6a0c18c98d54e7ce985759ea33aae541caac6494e3d04 Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.822112 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll" event={"ID":"d633c51f-1eea-4111-a46d-199e2f203c14","Type":"ContainerStarted","Data":"d6be207a90677336fd9f3ef2d675b83ce6a6ecfc0752734b1fa097a5e8dac480"} Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.823387 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6" event={"ID":"d656660c-1dd3-4c91-9ef7-12248f1f388a","Type":"ContainerStarted","Data":"1582764eb7dbc9e40b19d6217e77f13ddd7fd156b5a02b43db8d4e9a12720033"} Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.823683 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.824694 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5" event={"ID":"39e9a182-3baa-4d60-ac63-00d40443be7b","Type":"ContainerStarted","Data":"70b92db097efb8b98688ea2f153e88de75c1fed2ff60f9c17dae97c96fed6e0e"} Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.829952 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5wphd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xlfjs_openstack-operators(c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.830413 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm" event={"ID":"f8cae50b-944c-4dfd-8cae-5275b9290a07","Type":"ContainerStarted","Data":"a8ba01547e2a17d03df11beacd8e13aa14982cbf475115a7139f96ead86036ab"} Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.831200 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs" podUID="c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf" Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.832260 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 
08:56:11.833039 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs" event={"ID":"0626f4b2-1593-4b46-972d-079f3fe29ce3","Type":"ContainerStarted","Data":"b0a205e7df79ac2514544203f8a72e3b6e27abf4d6c210aa42eb7791a5117ca4"} Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.836297 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs" event={"ID":"0a961df2-a4a7-431d-a389-1cafd967a0bc","Type":"ContainerStarted","Data":"0caf035b54aa2d8d3065c6bf23ea56dd1f9741bdde2d78b3f6a70fe604985f06"} Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.843520 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rsxzt" podUID="f9a75c83-abd3-4a92-9ec5-9ed784340cb4" containerName="registry-server" containerID="cri-o://c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370" gracePeriod=2 Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.843642 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df" event={"ID":"0fdf5f90-ddf7-4c01-ba25-037628a298fb","Type":"ContainerStarted","Data":"585c622b185569f3106d4a222481815428ec9755e0dd8ce8d6a1bfd653d813f4"} Feb 17 08:56:11 crc kubenswrapper[4813]: W0217 08:56:11.849437 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e7a72dd_76e6_47b0_8d51_aad9504620c0.slice/crio-69d21d96b1b090a4468cc832a9646072bc782b24c45d058d1acdcf8cba357177 WatchSource:0}: Error finding container 69d21d96b1b090a4468cc832a9646072bc782b24c45d058d1acdcf8cba357177: Status 404 returned error can't find the container with id 69d21d96b1b090a4468cc832a9646072bc782b24c45d058d1acdcf8cba357177 Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.853322 4813 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6lvll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57bd55f9b7-4sccr_openstack-operators(8e7a72dd-76e6-47b0-8d51-aad9504620c0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.854405 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" podUID="8e7a72dd-76e6-47b0-8d51-aad9504620c0" Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.866853 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tnq5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-85c99d655-zzj6k_openstack-operators(6451fc3e-e020-442e-b5d2-7e1094379337): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.868485 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" podUID="6451fc3e-e020-442e-b5d2-7e1094379337" Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.971981 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd"] Feb 17 08:56:11 crc kubenswrapper[4813]: I0217 08:56:11.980796 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m\" (UID: \"c4ef02fa-778f-4072-b15c-a8e98631c083\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 
08:56:11.981001 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 08:56:11 crc kubenswrapper[4813]: E0217 08:56:11.981050 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert podName:c4ef02fa-778f-4072-b15c-a8e98631c083 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:13.981033525 +0000 UTC m=+921.641794748 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" (UID: "c4ef02fa-778f-4072-b15c-a8e98631c083") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.002759 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv"] Feb 17 08:56:12 crc kubenswrapper[4813]: W0217 08:56:12.050878 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1274f46d_7df1_478f_ad9f_4df095082c3a.slice/crio-6530d965b29dcc3fb1b78e28a9d228790051c0b82de17192d53f7e1c8772de58 WatchSource:0}: Error finding container 6530d965b29dcc3fb1b78e28a9d228790051c0b82de17192d53f7e1c8772de58: Status 404 returned error can't find the container with id 6530d965b29dcc3fb1b78e28a9d228790051c0b82de17192d53f7e1c8772de58 Feb 17 08:56:12 crc kubenswrapper[4813]: W0217 08:56:12.054634 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7cbb633_768c_4ab3_9243_252a24046c73.slice/crio-3257d1e4df7d36e066eeccbc619ff44565526c6449890b11531c1884ae8a911d WatchSource:0}: Error finding container 
3257d1e4df7d36e066eeccbc619ff44565526c6449890b11531c1884ae8a911d: Status 404 returned error can't find the container with id 3257d1e4df7d36e066eeccbc619ff44565526c6449890b11531c1884ae8a911d Feb 17 08:56:12 crc kubenswrapper[4813]: E0217 08:56:12.059869 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.9:5001/openstack-k8s-operators/watcher-operator:205feca93c544be6b9b4f78fb631537dc3a19ff8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zp79j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-56dcfd7757-sf2sv_openstack-operators(f7cbb633-768c-4ab3-9243-252a24046c73): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 08:56:12 crc kubenswrapper[4813]: E0217 08:56:12.061936 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" podUID="f7cbb633-768c-4ab3-9243-252a24046c73" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.283967 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:12 crc kubenswrapper[4813]: E0217 08:56:12.284127 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 08:56:12 crc kubenswrapper[4813]: E0217 
08:56:12.284191 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs podName:d9a0e392-aeea-4033-939e-52e42ebf3fa5 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:14.284174409 +0000 UTC m=+921.944935632 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs") pod "openstack-operator-controller-manager-6bf8b7b945-pqgpx" (UID: "d9a0e392-aeea-4033-939e-52e42ebf3fa5") : secret "metrics-server-cert" not found Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.284130 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:12 crc kubenswrapper[4813]: E0217 08:56:12.284246 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 08:56:12 crc kubenswrapper[4813]: E0217 08:56:12.284447 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs podName:d9a0e392-aeea-4033-939e-52e42ebf3fa5 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:14.284414846 +0000 UTC m=+921.945176069 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs") pod "openstack-operator-controller-manager-6bf8b7b945-pqgpx" (UID: "d9a0e392-aeea-4033-939e-52e42ebf3fa5") : secret "webhook-server-cert" not found Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.303503 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.387929 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-utilities\") pod \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\" (UID: \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\") " Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.388764 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-utilities" (OuterVolumeSpecName: "utilities") pod "f9a75c83-abd3-4a92-9ec5-9ed784340cb4" (UID: "f9a75c83-abd3-4a92-9ec5-9ed784340cb4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.388942 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-catalog-content\") pod \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\" (UID: \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\") " Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.394180 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-kube-api-access-qwp8f" (OuterVolumeSpecName: "kube-api-access-qwp8f") pod "f9a75c83-abd3-4a92-9ec5-9ed784340cb4" (UID: "f9a75c83-abd3-4a92-9ec5-9ed784340cb4"). 
InnerVolumeSpecName "kube-api-access-qwp8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.388979 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwp8f\" (UniqueName: \"kubernetes.io/projected/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-kube-api-access-qwp8f\") pod \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\" (UID: \"f9a75c83-abd3-4a92-9ec5-9ed784340cb4\") " Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.401187 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwp8f\" (UniqueName: \"kubernetes.io/projected/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-kube-api-access-qwp8f\") on node \"crc\" DevicePath \"\"" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.401203 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.450164 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9a75c83-abd3-4a92-9ec5-9ed784340cb4" (UID: "f9a75c83-abd3-4a92-9ec5-9ed784340cb4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.501984 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9a75c83-abd3-4a92-9ec5-9ed784340cb4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.863260 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7" event={"ID":"cfd95ad0-3c25-4884-a5fa-d91d1f771c1e","Type":"ContainerStarted","Data":"834202c6da496818efe0fe1f4d3e519d95cd0fcc57da5be309ef2e422785c3a3"} Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.866824 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs" event={"ID":"c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf","Type":"ContainerStarted","Data":"d69e5555d4cb0316e8c6a0c18c98d54e7ce985759ea33aae541caac6494e3d04"} Feb 17 08:56:12 crc kubenswrapper[4813]: E0217 08:56:12.869263 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs" podUID="c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.872597 4813 generic.go:334] "Generic (PLEG): container finished" podID="f9a75c83-abd3-4a92-9ec5-9ed784340cb4" containerID="c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370" exitCode=0 Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.872672 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rsxzt" 
event={"ID":"f9a75c83-abd3-4a92-9ec5-9ed784340cb4","Type":"ContainerDied","Data":"c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370"} Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.872701 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rsxzt" event={"ID":"f9a75c83-abd3-4a92-9ec5-9ed784340cb4","Type":"ContainerDied","Data":"0e55b1f848f145bdfdb140632fa19a9bed34e77b1af283833811bfe90eba2abb"} Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.872710 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rsxzt" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.872721 4813 scope.go:117] "RemoveContainer" containerID="c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.875978 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" event={"ID":"6451fc3e-e020-442e-b5d2-7e1094379337","Type":"ContainerStarted","Data":"bfc1d46a2fe6d7ab0c529ed0c7172e5c0644096b563b82ef82aada9a535d0316"} Feb 17 08:56:12 crc kubenswrapper[4813]: E0217 08:56:12.888076 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" podUID="6451fc3e-e020-442e-b5d2-7e1094379337" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.909235 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" 
event={"ID":"f7cbb633-768c-4ab3-9243-252a24046c73","Type":"ContainerStarted","Data":"3257d1e4df7d36e066eeccbc619ff44565526c6449890b11531c1884ae8a911d"} Feb 17 08:56:12 crc kubenswrapper[4813]: E0217 08:56:12.922188 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/openstack-k8s-operators/watcher-operator:205feca93c544be6b9b4f78fb631537dc3a19ff8\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" podUID="f7cbb633-768c-4ab3-9243-252a24046c73" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.923299 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" event={"ID":"8e7a72dd-76e6-47b0-8d51-aad9504620c0","Type":"ContainerStarted","Data":"69d21d96b1b090a4468cc832a9646072bc782b24c45d058d1acdcf8cba357177"} Feb 17 08:56:12 crc kubenswrapper[4813]: E0217 08:56:12.925209 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" podUID="8e7a72dd-76e6-47b0-8d51-aad9504620c0" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.929242 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rsxzt"] Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.935804 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rsxzt"] Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.950979 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd" 
event={"ID":"1274f46d-7df1-478f-ad9f-4df095082c3a","Type":"ContainerStarted","Data":"6530d965b29dcc3fb1b78e28a9d228790051c0b82de17192d53f7e1c8772de58"} Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.954054 4813 scope.go:117] "RemoveContainer" containerID="bc9048b9d0167b5d64bb85743029721cc31133a5303f5aa939b9fde6f736433e" Feb 17 08:56:12 crc kubenswrapper[4813]: I0217 08:56:12.994659 4813 scope.go:117] "RemoveContainer" containerID="6b83e6ef0fb1cb0712eba5e9c3e0e5a748c8ca9e776221a92352f44a9a471a4d" Feb 17 08:56:13 crc kubenswrapper[4813]: I0217 08:56:13.026365 4813 scope.go:117] "RemoveContainer" containerID="c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370" Feb 17 08:56:13 crc kubenswrapper[4813]: E0217 08:56:13.027502 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370\": container with ID starting with c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370 not found: ID does not exist" containerID="c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370" Feb 17 08:56:13 crc kubenswrapper[4813]: I0217 08:56:13.027545 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370"} err="failed to get container status \"c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370\": rpc error: code = NotFound desc = could not find container \"c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370\": container with ID starting with c3949c11ed2de037ad566ae19dc74553dcc4e1344c1e8d72778b289dd1b29370 not found: ID does not exist" Feb 17 08:56:13 crc kubenswrapper[4813]: I0217 08:56:13.027568 4813 scope.go:117] "RemoveContainer" containerID="bc9048b9d0167b5d64bb85743029721cc31133a5303f5aa939b9fde6f736433e" Feb 17 08:56:13 crc kubenswrapper[4813]: E0217 08:56:13.028057 4813 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc9048b9d0167b5d64bb85743029721cc31133a5303f5aa939b9fde6f736433e\": container with ID starting with bc9048b9d0167b5d64bb85743029721cc31133a5303f5aa939b9fde6f736433e not found: ID does not exist" containerID="bc9048b9d0167b5d64bb85743029721cc31133a5303f5aa939b9fde6f736433e" Feb 17 08:56:13 crc kubenswrapper[4813]: I0217 08:56:13.028116 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc9048b9d0167b5d64bb85743029721cc31133a5303f5aa939b9fde6f736433e"} err="failed to get container status \"bc9048b9d0167b5d64bb85743029721cc31133a5303f5aa939b9fde6f736433e\": rpc error: code = NotFound desc = could not find container \"bc9048b9d0167b5d64bb85743029721cc31133a5303f5aa939b9fde6f736433e\": container with ID starting with bc9048b9d0167b5d64bb85743029721cc31133a5303f5aa939b9fde6f736433e not found: ID does not exist" Feb 17 08:56:13 crc kubenswrapper[4813]: I0217 08:56:13.028141 4813 scope.go:117] "RemoveContainer" containerID="6b83e6ef0fb1cb0712eba5e9c3e0e5a748c8ca9e776221a92352f44a9a471a4d" Feb 17 08:56:13 crc kubenswrapper[4813]: E0217 08:56:13.028558 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b83e6ef0fb1cb0712eba5e9c3e0e5a748c8ca9e776221a92352f44a9a471a4d\": container with ID starting with 6b83e6ef0fb1cb0712eba5e9c3e0e5a748c8ca9e776221a92352f44a9a471a4d not found: ID does not exist" containerID="6b83e6ef0fb1cb0712eba5e9c3e0e5a748c8ca9e776221a92352f44a9a471a4d" Feb 17 08:56:13 crc kubenswrapper[4813]: I0217 08:56:13.028599 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b83e6ef0fb1cb0712eba5e9c3e0e5a748c8ca9e776221a92352f44a9a471a4d"} err="failed to get container status \"6b83e6ef0fb1cb0712eba5e9c3e0e5a748c8ca9e776221a92352f44a9a471a4d\": rpc error: code = NotFound 
desc = could not find container \"6b83e6ef0fb1cb0712eba5e9c3e0e5a748c8ca9e776221a92352f44a9a471a4d\": container with ID starting with 6b83e6ef0fb1cb0712eba5e9c3e0e5a748c8ca9e776221a92352f44a9a471a4d not found: ID does not exist" Feb 17 08:56:13 crc kubenswrapper[4813]: I0217 08:56:13.123287 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a75c83-abd3-4a92-9ec5-9ed784340cb4" path="/var/lib/kubelet/pods/f9a75c83-abd3-4a92-9ec5-9ed784340cb4/volumes" Feb 17 08:56:13 crc kubenswrapper[4813]: I0217 08:56:13.617428 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert\") pod \"infra-operator-controller-manager-66d6b5f488-flpcz\" (UID: \"ff0f7626-5da3-4763-8ae6-714ede4a2445\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:13 crc kubenswrapper[4813]: E0217 08:56:13.617609 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 08:56:13 crc kubenswrapper[4813]: E0217 08:56:13.617805 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert podName:ff0f7626-5da3-4763-8ae6-714ede4a2445 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:17.61778296 +0000 UTC m=+925.278544173 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert") pod "infra-operator-controller-manager-66d6b5f488-flpcz" (UID: "ff0f7626-5da3-4763-8ae6-714ede4a2445") : secret "infra-operator-webhook-server-cert" not found Feb 17 08:56:13 crc kubenswrapper[4813]: E0217 08:56:13.981740 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:4d3b6d259005ea30eee9c134d5fdf3d67eaacad8568ed105a34674e510086816\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" podUID="6451fc3e-e020-442e-b5d2-7e1094379337" Feb 17 08:56:13 crc kubenswrapper[4813]: E0217 08:56:13.981884 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" podUID="8e7a72dd-76e6-47b0-8d51-aad9504620c0" Feb 17 08:56:13 crc kubenswrapper[4813]: E0217 08:56:13.982024 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs" podUID="c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf" Feb 17 08:56:13 crc kubenswrapper[4813]: E0217 08:56:13.982638 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.9:5001/openstack-k8s-operators/watcher-operator:205feca93c544be6b9b4f78fb631537dc3a19ff8\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" podUID="f7cbb633-768c-4ab3-9243-252a24046c73" Feb 17 08:56:14 crc kubenswrapper[4813]: I0217 08:56:14.022854 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m\" (UID: \"c4ef02fa-778f-4072-b15c-a8e98631c083\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:14 crc kubenswrapper[4813]: E0217 08:56:14.023211 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 08:56:14 crc kubenswrapper[4813]: E0217 08:56:14.023302 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert podName:c4ef02fa-778f-4072-b15c-a8e98631c083 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:18.023279465 +0000 UTC m=+925.684040718 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" (UID: "c4ef02fa-778f-4072-b15c-a8e98631c083") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 08:56:14 crc kubenswrapper[4813]: I0217 08:56:14.327099 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:14 crc kubenswrapper[4813]: I0217 08:56:14.327251 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:14 crc kubenswrapper[4813]: E0217 08:56:14.327295 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 08:56:14 crc kubenswrapper[4813]: E0217 08:56:14.327379 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs podName:d9a0e392-aeea-4033-939e-52e42ebf3fa5 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:18.327359146 +0000 UTC m=+925.988120369 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs") pod "openstack-operator-controller-manager-6bf8b7b945-pqgpx" (UID: "d9a0e392-aeea-4033-939e-52e42ebf3fa5") : secret "metrics-server-cert" not found Feb 17 08:56:14 crc kubenswrapper[4813]: E0217 08:56:14.327899 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 08:56:14 crc kubenswrapper[4813]: E0217 08:56:14.327981 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs podName:d9a0e392-aeea-4033-939e-52e42ebf3fa5 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:18.327964803 +0000 UTC m=+925.988726056 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs") pod "openstack-operator-controller-manager-6bf8b7b945-pqgpx" (UID: "d9a0e392-aeea-4033-939e-52e42ebf3fa5") : secret "webhook-server-cert" not found Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.422557 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lf94m"] Feb 17 08:56:16 crc kubenswrapper[4813]: E0217 08:56:16.423375 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a75c83-abd3-4a92-9ec5-9ed784340cb4" containerName="registry-server" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.423397 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a75c83-abd3-4a92-9ec5-9ed784340cb4" containerName="registry-server" Feb 17 08:56:16 crc kubenswrapper[4813]: E0217 08:56:16.423422 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a75c83-abd3-4a92-9ec5-9ed784340cb4" containerName="extract-content" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.423438 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a75c83-abd3-4a92-9ec5-9ed784340cb4" containerName="extract-content" Feb 17 08:56:16 crc kubenswrapper[4813]: E0217 08:56:16.423455 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a75c83-abd3-4a92-9ec5-9ed784340cb4" containerName="extract-utilities" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.423468 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a75c83-abd3-4a92-9ec5-9ed784340cb4" containerName="extract-utilities" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.423781 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a75c83-abd3-4a92-9ec5-9ed784340cb4" containerName="registry-server" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.426039 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.444245 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lf94m"] Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.562242 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbbs5\" (UniqueName: \"kubernetes.io/projected/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-kube-api-access-mbbs5\") pod \"redhat-marketplace-lf94m\" (UID: \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\") " pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.562305 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-utilities\") pod \"redhat-marketplace-lf94m\" (UID: \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\") " pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.562334 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-catalog-content\") pod \"redhat-marketplace-lf94m\" (UID: \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\") " pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.663419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbbs5\" (UniqueName: \"kubernetes.io/projected/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-kube-api-access-mbbs5\") pod \"redhat-marketplace-lf94m\" (UID: \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\") " pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.663481 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-catalog-content\") pod \"redhat-marketplace-lf94m\" (UID: \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\") " pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.663498 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-utilities\") pod \"redhat-marketplace-lf94m\" (UID: \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\") " pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.663981 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-catalog-content\") pod \"redhat-marketplace-lf94m\" (UID: \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\") " pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.664009 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-utilities\") pod \"redhat-marketplace-lf94m\" (UID: \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\") " pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.691119 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbbs5\" (UniqueName: \"kubernetes.io/projected/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-kube-api-access-mbbs5\") pod \"redhat-marketplace-lf94m\" (UID: \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\") " pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:16 crc kubenswrapper[4813]: I0217 08:56:16.786177 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:17 crc kubenswrapper[4813]: I0217 08:56:17.678778 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert\") pod \"infra-operator-controller-manager-66d6b5f488-flpcz\" (UID: \"ff0f7626-5da3-4763-8ae6-714ede4a2445\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:17 crc kubenswrapper[4813]: E0217 08:56:17.678961 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 08:56:17 crc kubenswrapper[4813]: E0217 08:56:17.679035 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert podName:ff0f7626-5da3-4763-8ae6-714ede4a2445 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:25.679017238 +0000 UTC m=+933.339778461 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert") pod "infra-operator-controller-manager-66d6b5f488-flpcz" (UID: "ff0f7626-5da3-4763-8ae6-714ede4a2445") : secret "infra-operator-webhook-server-cert" not found Feb 17 08:56:18 crc kubenswrapper[4813]: I0217 08:56:18.084503 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m\" (UID: \"c4ef02fa-778f-4072-b15c-a8e98631c083\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:18 crc kubenswrapper[4813]: E0217 08:56:18.084749 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 08:56:18 crc kubenswrapper[4813]: E0217 08:56:18.085045 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert podName:c4ef02fa-778f-4072-b15c-a8e98631c083 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:26.085012478 +0000 UTC m=+933.745773731 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" (UID: "c4ef02fa-778f-4072-b15c-a8e98631c083") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 08:56:18 crc kubenswrapper[4813]: I0217 08:56:18.389665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:18 crc kubenswrapper[4813]: I0217 08:56:18.389750 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:18 crc kubenswrapper[4813]: E0217 08:56:18.389802 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 08:56:18 crc kubenswrapper[4813]: E0217 08:56:18.389852 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs podName:d9a0e392-aeea-4033-939e-52e42ebf3fa5 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:26.38983815 +0000 UTC m=+934.050599373 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs") pod "openstack-operator-controller-manager-6bf8b7b945-pqgpx" (UID: "d9a0e392-aeea-4033-939e-52e42ebf3fa5") : secret "webhook-server-cert" not found Feb 17 08:56:18 crc kubenswrapper[4813]: E0217 08:56:18.389997 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 08:56:18 crc kubenswrapper[4813]: E0217 08:56:18.390087 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs podName:d9a0e392-aeea-4033-939e-52e42ebf3fa5 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:26.390066027 +0000 UTC m=+934.050827260 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs") pod "openstack-operator-controller-manager-6bf8b7b945-pqgpx" (UID: "d9a0e392-aeea-4033-939e-52e42ebf3fa5") : secret "metrics-server-cert" not found Feb 17 08:56:25 crc kubenswrapper[4813]: E0217 08:56:25.421426 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9" Feb 17 08:56:25 crc kubenswrapper[4813]: E0217 08:56:25.422126 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pmvz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-79558bbfbf-w9mll_openstack-operators(d633c51f-1eea-4111-a46d-199e2f203c14): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 08:56:25 crc kubenswrapper[4813]: E0217 08:56:25.423326 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll" podUID="d633c51f-1eea-4111-a46d-199e2f203c14" Feb 17 08:56:25 crc kubenswrapper[4813]: I0217 08:56:25.704727 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert\") pod \"infra-operator-controller-manager-66d6b5f488-flpcz\" (UID: \"ff0f7626-5da3-4763-8ae6-714ede4a2445\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:25 crc kubenswrapper[4813]: E0217 08:56:25.704909 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Feb 17 08:56:25 crc kubenswrapper[4813]: E0217 08:56:25.704962 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert podName:ff0f7626-5da3-4763-8ae6-714ede4a2445 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:41.704947329 +0000 UTC m=+949.365708542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert") pod "infra-operator-controller-manager-66d6b5f488-flpcz" (UID: "ff0f7626-5da3-4763-8ae6-714ede4a2445") : secret "infra-operator-webhook-server-cert" not found Feb 17 08:56:25 crc kubenswrapper[4813]: E0217 08:56:25.957850 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c" Feb 17 08:56:25 crc kubenswrapper[4813]: E0217 08:56:25.958040 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwfb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5ddd85db87-wbbfl_openstack-operators(28206bb6-553c-4ccd-bb15-8c42c7f34415): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 08:56:25 crc kubenswrapper[4813]: E0217 08:56:25.959233 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl" podUID="28206bb6-553c-4ccd-bb15-8c42c7f34415" Feb 17 08:56:26 crc kubenswrapper[4813]: E0217 08:56:26.072130 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9\\\"\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll" podUID="d633c51f-1eea-4111-a46d-199e2f203c14" Feb 17 08:56:26 crc kubenswrapper[4813]: E0217 08:56:26.072257 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl" podUID="28206bb6-553c-4ccd-bb15-8c42c7f34415" Feb 17 08:56:26 crc kubenswrapper[4813]: I0217 08:56:26.113276 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m\" (UID: \"c4ef02fa-778f-4072-b15c-a8e98631c083\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:26 crc kubenswrapper[4813]: E0217 08:56:26.113418 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 08:56:26 crc kubenswrapper[4813]: E0217 08:56:26.113481 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert podName:c4ef02fa-778f-4072-b15c-a8e98631c083 nodeName:}" 
failed. No retries permitted until 2026-02-17 08:56:42.113464951 +0000 UTC m=+949.774226264 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" (UID: "c4ef02fa-778f-4072-b15c-a8e98631c083") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 08:56:26 crc kubenswrapper[4813]: I0217 08:56:26.418177 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:26 crc kubenswrapper[4813]: I0217 08:56:26.418286 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:26 crc kubenswrapper[4813]: E0217 08:56:26.418513 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 08:56:26 crc kubenswrapper[4813]: E0217 08:56:26.418579 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs podName:d9a0e392-aeea-4033-939e-52e42ebf3fa5 nodeName:}" failed. No retries permitted until 2026-02-17 08:56:42.418557281 +0000 UTC m=+950.079318544 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs") pod "openstack-operator-controller-manager-6bf8b7b945-pqgpx" (UID: "d9a0e392-aeea-4033-939e-52e42ebf3fa5") : secret "metrics-server-cert" not found Feb 17 08:56:26 crc kubenswrapper[4813]: I0217 08:56:26.424300 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-webhook-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:26 crc kubenswrapper[4813]: E0217 08:56:26.523825 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469" Feb 17 08:56:26 crc kubenswrapper[4813]: E0217 08:56:26.525040 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqfzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-6c78d668d5-k4qmx_openstack-operators(4bfccf88-ba10-4e4e-a6f8-d3d7a362990d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 08:56:26 crc kubenswrapper[4813]: E0217 08:56:26.526452 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx" podUID="4bfccf88-ba10-4e4e-a6f8-d3d7a362990d" Feb 17 08:56:26 crc kubenswrapper[4813]: I0217 08:56:26.956116 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lf94m"] Feb 17 08:56:26 crc kubenswrapper[4813]: W0217 08:56:26.992597 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod862ea87f_d6ca_40f3_8e76_2e6e4542e1b1.slice/crio-8cbbbc479e587a392c74603570ff3e67849a2dc50487bfd89485940ac681c7f6 WatchSource:0}: Error finding container 8cbbbc479e587a392c74603570ff3e67849a2dc50487bfd89485940ac681c7f6: Status 404 returned error can't find the container with id 8cbbbc479e587a392c74603570ff3e67849a2dc50487bfd89485940ac681c7f6 Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.083349 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm" event={"ID":"5c6a587a-9a0b-458f-aea4-445dbcfdaecc","Type":"ContainerStarted","Data":"2fbcf2d013ce7153f10b44a8d6673eb2257bf2748ae78ef59bd20a2e36206802"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.083712 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.089912 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lf94m" event={"ID":"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1","Type":"ContainerStarted","Data":"8cbbbc479e587a392c74603570ff3e67849a2dc50487bfd89485940ac681c7f6"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.102104 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm" podStartSLOduration=3.316171749 podStartE2EDuration="18.102075156s" 
podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.721911523 +0000 UTC m=+919.382672746" lastFinishedPulling="2026-02-17 08:56:26.50781491 +0000 UTC m=+934.168576153" observedRunningTime="2026-02-17 08:56:27.100002027 +0000 UTC m=+934.760763250" watchObservedRunningTime="2026-02-17 08:56:27.102075156 +0000 UTC m=+934.762836379" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.109950 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs" event={"ID":"0a961df2-a4a7-431d-a389-1cafd967a0bc","Type":"ContainerStarted","Data":"e9f9a074087bf4c3a0aff8a90663eac9d3e0844d3eac3c6ce38ae3eac5804263"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.110231 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.123481 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-9595d6797-c6plm" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.123518 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-c6plm" event={"ID":"daecd5b7-6576-4ddf-bb48-2131c26a9995","Type":"ContainerStarted","Data":"0cf4d6ed87e1d35514856252f7341dbe8e650ef82845ccfe93e12281b9858cb4"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.123538 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd" event={"ID":"1274f46d-7df1-478f-ad9f-4df095082c3a","Type":"ContainerStarted","Data":"da0a7c5e9b24a73f66a74aee0ff78063900c295537e825240fa39ea212e4e3d5"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.123549 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.133005 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs" podStartSLOduration=3.325238457 podStartE2EDuration="18.132987796s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.697872579 +0000 UTC m=+919.358633802" lastFinishedPulling="2026-02-17 08:56:26.505621918 +0000 UTC m=+934.166383141" observedRunningTime="2026-02-17 08:56:27.127913982 +0000 UTC m=+934.788675205" watchObservedRunningTime="2026-02-17 08:56:27.132987796 +0000 UTC m=+934.793749019" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.140760 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6" event={"ID":"d656660c-1dd3-4c91-9ef7-12248f1f388a","Type":"ContainerStarted","Data":"228ed585cf83502ff93d9da4fe2976e7a946021394d53a4ebcbab772d4995d67"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.140949 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.144292 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5" event={"ID":"39e9a182-3baa-4d60-ac63-00d40443be7b","Type":"ContainerStarted","Data":"81c1739705515e4fe1d7d8308d5d8ca55eb17bef065cf2d5c4292467d036689e"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.144427 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.145872 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm" event={"ID":"f8cae50b-944c-4dfd-8cae-5275b9290a07","Type":"ContainerStarted","Data":"6cfb2821d6ce9f28842d7fe02a53178d3f726477e67a8538c88809c9e2cbc3dc"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.146296 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.154539 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7" event={"ID":"cfd95ad0-3c25-4884-a5fa-d91d1f771c1e","Type":"ContainerStarted","Data":"d4682c332511e7c6607768dc12b41865ede67583519beb4d6f22672a011e6c66"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.154766 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.156457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df" event={"ID":"0fdf5f90-ddf7-4c01-ba25-037628a298fb","Type":"ContainerStarted","Data":"ea91563c7f805d7c7c82209659638a13ea6f8a531a52ef0f6b6b8fdb69ad7d55"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.157250 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.167919 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx" event={"ID":"08d18b9c-b137-4735-9e80-95636feac4ed","Type":"ContainerStarted","Data":"baa157d4cd6356ddfef87d5863093f2ecf5b768d1d84c1409e8d50763023418f"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.168057 4813 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.171749 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x" event={"ID":"7e6fd8d2-9aeb-432a-9c01-e22332432a28","Type":"ContainerStarted","Data":"827fad92f9064327e9c48e9df4382783ccd3592c72e8dc828556020ffc834c1d"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.171815 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.174673 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-9595d6797-c6plm" podStartSLOduration=2.927160392 podStartE2EDuration="18.174653281s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.259211239 +0000 UTC m=+918.919972462" lastFinishedPulling="2026-02-17 08:56:26.506704108 +0000 UTC m=+934.167465351" observedRunningTime="2026-02-17 08:56:27.166154999 +0000 UTC m=+934.826916222" watchObservedRunningTime="2026-02-17 08:56:27.174653281 +0000 UTC m=+934.835414584" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.187191 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp" event={"ID":"b58443c8-72d6-42ba-a920-9c11a9bc6b6e","Type":"ContainerStarted","Data":"d60f7efea3e460be71ed76d90dc16592e641b54bcfd5dcfe8100aaa9950589cb"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.187868 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.188186 4813 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd" podStartSLOduration=2.734514881 podStartE2EDuration="17.188164446s" podCreationTimestamp="2026-02-17 08:56:10 +0000 UTC" firstStartedPulling="2026-02-17 08:56:12.05399832 +0000 UTC m=+919.714759543" lastFinishedPulling="2026-02-17 08:56:26.507647895 +0000 UTC m=+934.168409108" observedRunningTime="2026-02-17 08:56:27.181715842 +0000 UTC m=+934.842477065" watchObservedRunningTime="2026-02-17 08:56:27.188164446 +0000 UTC m=+934.848925669" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.201154 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs" event={"ID":"0626f4b2-1593-4b46-972d-079f3fe29ce3","Type":"ContainerStarted","Data":"cf5486b6261cfa254024e986aff57cb828f60295e775ee0985e3307c32f38055"} Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.201191 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs" Feb 17 08:56:27 crc kubenswrapper[4813]: E0217 08:56:27.201760 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx" podUID="4bfccf88-ba10-4e4e-a6f8-d3d7a362990d" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.225488 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6" podStartSLOduration=3.459686212 podStartE2EDuration="18.225474047s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 
08:56:11.739597616 +0000 UTC m=+919.400358829" lastFinishedPulling="2026-02-17 08:56:26.505385441 +0000 UTC m=+934.166146664" observedRunningTime="2026-02-17 08:56:27.220208847 +0000 UTC m=+934.880970070" watchObservedRunningTime="2026-02-17 08:56:27.225474047 +0000 UTC m=+934.886235270" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.265563 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7" podStartSLOduration=2.582724052 podStartE2EDuration="17.265543287s" podCreationTimestamp="2026-02-17 08:56:10 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.821124115 +0000 UTC m=+919.481885338" lastFinishedPulling="2026-02-17 08:56:26.50394335 +0000 UTC m=+934.164704573" observedRunningTime="2026-02-17 08:56:27.263418567 +0000 UTC m=+934.924179800" watchObservedRunningTime="2026-02-17 08:56:27.265543287 +0000 UTC m=+934.926304510" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.344165 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm" podStartSLOduration=2.722652983 podStartE2EDuration="18.344144253s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:10.884023235 +0000 UTC m=+918.544784458" lastFinishedPulling="2026-02-17 08:56:26.505514495 +0000 UTC m=+934.166275728" observedRunningTime="2026-02-17 08:56:27.342497736 +0000 UTC m=+935.003258959" watchObservedRunningTime="2026-02-17 08:56:27.344144253 +0000 UTC m=+935.004905486" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.345941 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x" podStartSLOduration=3.548306733 podStartE2EDuration="18.345929254s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.707787661 +0000 
UTC m=+919.368548884" lastFinishedPulling="2026-02-17 08:56:26.505410182 +0000 UTC m=+934.166171405" observedRunningTime="2026-02-17 08:56:27.316561508 +0000 UTC m=+934.977322731" watchObservedRunningTime="2026-02-17 08:56:27.345929254 +0000 UTC m=+935.006690477" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.373654 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx" podStartSLOduration=3.624457369 podStartE2EDuration="18.373637572s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.74818218 +0000 UTC m=+919.408943413" lastFinishedPulling="2026-02-17 08:56:26.497362393 +0000 UTC m=+934.158123616" observedRunningTime="2026-02-17 08:56:27.366912181 +0000 UTC m=+935.027673414" watchObservedRunningTime="2026-02-17 08:56:27.373637572 +0000 UTC m=+935.034398795" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.410532 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df" podStartSLOduration=3.107705678 podStartE2EDuration="18.410513231s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.200292913 +0000 UTC m=+918.861054136" lastFinishedPulling="2026-02-17 08:56:26.503100476 +0000 UTC m=+934.163861689" observedRunningTime="2026-02-17 08:56:27.410328816 +0000 UTC m=+935.071090039" watchObservedRunningTime="2026-02-17 08:56:27.410513231 +0000 UTC m=+935.071274454" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.481149 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs" podStartSLOduration=3.210982356 podStartE2EDuration="18.48113525s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.239381495 +0000 UTC m=+918.900142718" 
lastFinishedPulling="2026-02-17 08:56:26.509534379 +0000 UTC m=+934.170295612" observedRunningTime="2026-02-17 08:56:27.47899238 +0000 UTC m=+935.139753603" watchObservedRunningTime="2026-02-17 08:56:27.48113525 +0000 UTC m=+935.141896473" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.482983 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5" podStartSLOduration=3.681132752 podStartE2EDuration="18.482977903s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.707027639 +0000 UTC m=+919.367788862" lastFinishedPulling="2026-02-17 08:56:26.50887279 +0000 UTC m=+934.169634013" observedRunningTime="2026-02-17 08:56:27.461263475 +0000 UTC m=+935.122024698" watchObservedRunningTime="2026-02-17 08:56:27.482977903 +0000 UTC m=+935.143739116" Feb 17 08:56:27 crc kubenswrapper[4813]: I0217 08:56:27.558410 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp" podStartSLOduration=2.44553438 podStartE2EDuration="18.558392368s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:10.384534185 +0000 UTC m=+918.045295408" lastFinishedPulling="2026-02-17 08:56:26.497392173 +0000 UTC m=+934.158153396" observedRunningTime="2026-02-17 08:56:27.554114297 +0000 UTC m=+935.214875520" watchObservedRunningTime="2026-02-17 08:56:27.558392368 +0000 UTC m=+935.219153591" Feb 17 08:56:28 crc kubenswrapper[4813]: I0217 08:56:28.210337 4813 generic.go:334] "Generic (PLEG): container finished" podID="862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" containerID="270045c2fef5893a613d458b36785a9a9716d73dde2a5370f3994eec72ba9a7c" exitCode=0 Feb 17 08:56:28 crc kubenswrapper[4813]: I0217 08:56:28.210444 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lf94m" 
event={"ID":"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1","Type":"ContainerDied","Data":"270045c2fef5893a613d458b36785a9a9716d73dde2a5370f3994eec72ba9a7c"} Feb 17 08:56:35 crc kubenswrapper[4813]: I0217 08:56:35.166087 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:56:35 crc kubenswrapper[4813]: I0217 08:56:35.166749 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:56:35 crc kubenswrapper[4813]: I0217 08:56:35.166813 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 08:56:35 crc kubenswrapper[4813]: I0217 08:56:35.168280 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7212b4b532e53d852c5e6fbe5aa59b96599c01899ec81be229b35b10904557df"} pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 08:56:35 crc kubenswrapper[4813]: I0217 08:56:35.168446 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" containerID="cri-o://7212b4b532e53d852c5e6fbe5aa59b96599c01899ec81be229b35b10904557df" gracePeriod=600 Feb 17 08:56:36 crc kubenswrapper[4813]: I0217 08:56:36.291846 4813 
generic.go:334] "Generic (PLEG): container finished" podID="3a6ba827-b08b-4163-b067-d9adb119398d" containerID="7212b4b532e53d852c5e6fbe5aa59b96599c01899ec81be229b35b10904557df" exitCode=0 Feb 17 08:56:36 crc kubenswrapper[4813]: I0217 08:56:36.291923 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerDied","Data":"7212b4b532e53d852c5e6fbe5aa59b96599c01899ec81be229b35b10904557df"} Feb 17 08:56:36 crc kubenswrapper[4813]: I0217 08:56:36.292147 4813 scope.go:117] "RemoveContainer" containerID="7375dc71231db7ccf7ec9a93ed4b7c58981e373ed44891ec0dfde219ffc963ad" Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.320573 4813 generic.go:334] "Generic (PLEG): container finished" podID="862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" containerID="95890bc6ca126e19788586983a8f01b99dd21ad0e7ffcb81f76de130aef710a8" exitCode=0 Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.320647 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lf94m" event={"ID":"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1","Type":"ContainerDied","Data":"95890bc6ca126e19788586983a8f01b99dd21ad0e7ffcb81f76de130aef710a8"} Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.325494 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs" event={"ID":"c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf","Type":"ContainerStarted","Data":"af5cf61cc97254edab7d69279f0a86e548e2dadcdee3000bee58adaf0252e15b"} Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.327854 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" event={"ID":"6451fc3e-e020-442e-b5d2-7e1094379337","Type":"ContainerStarted","Data":"daf31f106773aad2892ddbabe7381016857d7edcc048fd431c139d23a78b8ac8"} Feb 17 08:56:38 crc 
kubenswrapper[4813]: I0217 08:56:38.328141 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.330143 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" event={"ID":"f7cbb633-768c-4ab3-9243-252a24046c73","Type":"ContainerStarted","Data":"5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404"} Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.330574 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.333610 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerStarted","Data":"e4284fb46e7e232957891008abfaed8819255b12cf7fd236cbf3602b7e1a318a"} Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.336209 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" event={"ID":"8e7a72dd-76e6-47b0-8d51-aad9504620c0","Type":"ContainerStarted","Data":"41fc87525e9bea392195c5f2e38262ac2c0b71df6e0e36007c72a58ebee99d22"} Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.336666 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.379013 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" podStartSLOduration=2.978151272 podStartE2EDuration="28.378995336s" podCreationTimestamp="2026-02-17 08:56:10 +0000 UTC" 
firstStartedPulling="2026-02-17 08:56:11.866673091 +0000 UTC m=+919.527434314" lastFinishedPulling="2026-02-17 08:56:37.267517145 +0000 UTC m=+944.928278378" observedRunningTime="2026-02-17 08:56:38.378110071 +0000 UTC m=+946.038871334" watchObservedRunningTime="2026-02-17 08:56:38.378995336 +0000 UTC m=+946.039756559" Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.407525 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" podStartSLOduration=3.098870027 podStartE2EDuration="28.407505958s" podCreationTimestamp="2026-02-17 08:56:10 +0000 UTC" firstStartedPulling="2026-02-17 08:56:12.059756054 +0000 UTC m=+919.720517277" lastFinishedPulling="2026-02-17 08:56:37.368391985 +0000 UTC m=+945.029153208" observedRunningTime="2026-02-17 08:56:38.402218037 +0000 UTC m=+946.062979260" watchObservedRunningTime="2026-02-17 08:56:38.407505958 +0000 UTC m=+946.068267201" Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.427671 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" podStartSLOduration=2.957254567 podStartE2EDuration="28.42764353s" podCreationTimestamp="2026-02-17 08:56:10 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.853166117 +0000 UTC m=+919.513927340" lastFinishedPulling="2026-02-17 08:56:37.32355507 +0000 UTC m=+944.984316303" observedRunningTime="2026-02-17 08:56:38.417301456 +0000 UTC m=+946.078062689" watchObservedRunningTime="2026-02-17 08:56:38.42764353 +0000 UTC m=+946.088404793" Feb 17 08:56:38 crc kubenswrapper[4813]: I0217 08:56:38.462057 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xlfjs" podStartSLOduration=2.985629535 podStartE2EDuration="28.462039839s" podCreationTimestamp="2026-02-17 08:56:10 +0000 UTC" firstStartedPulling="2026-02-17 
08:56:11.829751361 +0000 UTC m=+919.490512584" lastFinishedPulling="2026-02-17 08:56:37.306161645 +0000 UTC m=+944.966922888" observedRunningTime="2026-02-17 08:56:38.456826811 +0000 UTC m=+946.117588074" watchObservedRunningTime="2026-02-17 08:56:38.462039839 +0000 UTC m=+946.122801062" Feb 17 08:56:39 crc kubenswrapper[4813]: I0217 08:56:39.349720 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lf94m" event={"ID":"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1","Type":"ContainerStarted","Data":"585294dabcada4c62b9af9a8f7cfd49b2d41d7b1686738d1841c299427843214"} Feb 17 08:56:39 crc kubenswrapper[4813]: I0217 08:56:39.375422 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lf94m" podStartSLOduration=12.855430907 podStartE2EDuration="23.375402723s" podCreationTimestamp="2026-02-17 08:56:16 +0000 UTC" firstStartedPulling="2026-02-17 08:56:28.215511222 +0000 UTC m=+935.876272445" lastFinishedPulling="2026-02-17 08:56:38.735483028 +0000 UTC m=+946.396244261" observedRunningTime="2026-02-17 08:56:39.36932206 +0000 UTC m=+947.030083283" watchObservedRunningTime="2026-02-17 08:56:39.375402723 +0000 UTC m=+947.036163956" Feb 17 08:56:39 crc kubenswrapper[4813]: I0217 08:56:39.998103 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-hz7df" Feb 17 08:56:40 crc kubenswrapper[4813]: I0217 08:56:40.015381 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-mkmzp" Feb 17 08:56:40 crc kubenswrapper[4813]: I0217 08:56:40.024203 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-r79hm" Feb 17 08:56:40 crc kubenswrapper[4813]: I0217 08:56:40.061877 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-zhnzs" Feb 17 08:56:40 crc kubenswrapper[4813]: I0217 08:56:40.105569 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-9595d6797-c6plm" Feb 17 08:56:40 crc kubenswrapper[4813]: I0217 08:56:40.121461 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-vxh2x" Feb 17 08:56:40 crc kubenswrapper[4813]: I0217 08:56:40.149715 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-h5hbm" Feb 17 08:56:40 crc kubenswrapper[4813]: I0217 08:56:40.188400 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-cktr5" Feb 17 08:56:40 crc kubenswrapper[4813]: I0217 08:56:40.196478 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-8ppzs" Feb 17 08:56:40 crc kubenswrapper[4813]: I0217 08:56:40.364215 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-7znk6" Feb 17 08:56:40 crc kubenswrapper[4813]: I0217 08:56:40.509072 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-84kxx" Feb 17 08:56:40 crc kubenswrapper[4813]: I0217 08:56:40.647161 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-dljh7" Feb 17 08:56:40 crc kubenswrapper[4813]: I0217 08:56:40.763140 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-gwmhd" Feb 17 08:56:41 crc kubenswrapper[4813]: I0217 08:56:41.368938 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl" event={"ID":"28206bb6-553c-4ccd-bb15-8c42c7f34415","Type":"ContainerStarted","Data":"b7979a91b4e8d8ea653f886fcccd5823d6c0e584d92ce60f91746632e049b57b"} Feb 17 08:56:41 crc kubenswrapper[4813]: I0217 08:56:41.369219 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl" Feb 17 08:56:41 crc kubenswrapper[4813]: I0217 08:56:41.399935 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl" podStartSLOduration=3.622358321 podStartE2EDuration="32.399910509s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.739442302 +0000 UTC m=+919.400203525" lastFinishedPulling="2026-02-17 08:56:40.51699449 +0000 UTC m=+948.177755713" observedRunningTime="2026-02-17 08:56:41.389162983 +0000 UTC m=+949.049924246" watchObservedRunningTime="2026-02-17 08:56:41.399910509 +0000 UTC m=+949.060671762" Feb 17 08:56:41 crc kubenswrapper[4813]: I0217 08:56:41.776796 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert\") pod \"infra-operator-controller-manager-66d6b5f488-flpcz\" (UID: \"ff0f7626-5da3-4763-8ae6-714ede4a2445\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:41 crc kubenswrapper[4813]: I0217 08:56:41.790096 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ff0f7626-5da3-4763-8ae6-714ede4a2445-cert\") pod \"infra-operator-controller-manager-66d6b5f488-flpcz\" (UID: 
\"ff0f7626-5da3-4763-8ae6-714ede4a2445\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:41 crc kubenswrapper[4813]: I0217 08:56:41.929278 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:42 crc kubenswrapper[4813]: I0217 08:56:42.183036 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m\" (UID: \"c4ef02fa-778f-4072-b15c-a8e98631c083\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:42 crc kubenswrapper[4813]: I0217 08:56:42.189051 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4ef02fa-778f-4072-b15c-a8e98631c083-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m\" (UID: \"c4ef02fa-778f-4072-b15c-a8e98631c083\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:42 crc kubenswrapper[4813]: I0217 08:56:42.288531 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz"] Feb 17 08:56:42 crc kubenswrapper[4813]: W0217 08:56:42.308828 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff0f7626_5da3_4763_8ae6_714ede4a2445.slice/crio-67c01d82c12597241ad824f4fd5617632ec070021bd0f720a12e321c8b12cb55 WatchSource:0}: Error finding container 67c01d82c12597241ad824f4fd5617632ec070021bd0f720a12e321c8b12cb55: Status 404 returned error can't find the container with id 67c01d82c12597241ad824f4fd5617632ec070021bd0f720a12e321c8b12cb55 Feb 17 08:56:42 crc kubenswrapper[4813]: 
I0217 08:56:42.321913 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:42 crc kubenswrapper[4813]: I0217 08:56:42.387043 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll" event={"ID":"d633c51f-1eea-4111-a46d-199e2f203c14","Type":"ContainerStarted","Data":"7648483d4d6e4fb02e855b49dab36f2f81c09d3471c6f0317d66c717416a0628"} Feb 17 08:56:42 crc kubenswrapper[4813]: I0217 08:56:42.387264 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll" Feb 17 08:56:42 crc kubenswrapper[4813]: I0217 08:56:42.394727 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" event={"ID":"ff0f7626-5da3-4763-8ae6-714ede4a2445","Type":"ContainerStarted","Data":"67c01d82c12597241ad824f4fd5617632ec070021bd0f720a12e321c8b12cb55"} Feb 17 08:56:42 crc kubenswrapper[4813]: I0217 08:56:42.404864 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll" podStartSLOduration=2.6407975649999997 podStartE2EDuration="32.404847499s" podCreationTimestamp="2026-02-17 08:56:10 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.772710088 +0000 UTC m=+919.433471311" lastFinishedPulling="2026-02-17 08:56:41.536759992 +0000 UTC m=+949.197521245" observedRunningTime="2026-02-17 08:56:42.399341322 +0000 UTC m=+950.060102555" watchObservedRunningTime="2026-02-17 08:56:42.404847499 +0000 UTC m=+950.065608722" Feb 17 08:56:42 crc kubenswrapper[4813]: I0217 08:56:42.486953 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:42 crc kubenswrapper[4813]: I0217 08:56:42.500867 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9a0e392-aeea-4033-939e-52e42ebf3fa5-metrics-certs\") pod \"openstack-operator-controller-manager-6bf8b7b945-pqgpx\" (UID: \"d9a0e392-aeea-4033-939e-52e42ebf3fa5\") " pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:42 crc kubenswrapper[4813]: I0217 08:56:42.744643 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:42 crc kubenswrapper[4813]: I0217 08:56:42.807783 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m"] Feb 17 08:56:43 crc kubenswrapper[4813]: I0217 08:56:43.233016 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx"] Feb 17 08:56:43 crc kubenswrapper[4813]: W0217 08:56:43.248381 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9a0e392_aeea_4033_939e_52e42ebf3fa5.slice/crio-beb8598698ac1bcd442bfd365410e5d89b23c0e4dae312dd621315660bff4076 WatchSource:0}: Error finding container beb8598698ac1bcd442bfd365410e5d89b23c0e4dae312dd621315660bff4076: Status 404 returned error can't find the container with id beb8598698ac1bcd442bfd365410e5d89b23c0e4dae312dd621315660bff4076 Feb 17 08:56:43 crc kubenswrapper[4813]: I0217 08:56:43.403914 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx" event={"ID":"4bfccf88-ba10-4e4e-a6f8-d3d7a362990d","Type":"ContainerStarted","Data":"69d16773dcd808d50bc0bd475e52ac94ca38a6a2cab33fdc02bc739b83f212a3"} Feb 17 08:56:43 crc kubenswrapper[4813]: I0217 08:56:43.404180 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx" Feb 17 08:56:43 crc kubenswrapper[4813]: I0217 08:56:43.407800 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" event={"ID":"d9a0e392-aeea-4033-939e-52e42ebf3fa5","Type":"ContainerStarted","Data":"2765295451a8c54f8015dfd2dc64ab55eb074ac2f57b5cedb4dac3ebf1e0c3e4"} Feb 17 08:56:43 crc kubenswrapper[4813]: I0217 08:56:43.407837 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" event={"ID":"d9a0e392-aeea-4033-939e-52e42ebf3fa5","Type":"ContainerStarted","Data":"beb8598698ac1bcd442bfd365410e5d89b23c0e4dae312dd621315660bff4076"} Feb 17 08:56:43 crc kubenswrapper[4813]: I0217 08:56:43.407909 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:43 crc kubenswrapper[4813]: I0217 08:56:43.410071 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" event={"ID":"c4ef02fa-778f-4072-b15c-a8e98631c083","Type":"ContainerStarted","Data":"36da7a92cd5b613d03efdba646a4770d5ab6900b18d0f91044c07cbec05016c6"} Feb 17 08:56:43 crc kubenswrapper[4813]: I0217 08:56:43.421469 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx" podStartSLOduration=3.565112972 
podStartE2EDuration="34.42145134s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:11.690775877 +0000 UTC m=+919.351537100" lastFinishedPulling="2026-02-17 08:56:42.547114245 +0000 UTC m=+950.207875468" observedRunningTime="2026-02-17 08:56:43.418817635 +0000 UTC m=+951.079578868" watchObservedRunningTime="2026-02-17 08:56:43.42145134 +0000 UTC m=+951.082212563" Feb 17 08:56:43 crc kubenswrapper[4813]: I0217 08:56:43.448956 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" podStartSLOduration=33.448941192 podStartE2EDuration="33.448941192s" podCreationTimestamp="2026-02-17 08:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:56:43.444344441 +0000 UTC m=+951.105105674" watchObservedRunningTime="2026-02-17 08:56:43.448941192 +0000 UTC m=+951.109702415" Feb 17 08:56:45 crc kubenswrapper[4813]: I0217 08:56:45.430498 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" event={"ID":"c4ef02fa-778f-4072-b15c-a8e98631c083","Type":"ContainerStarted","Data":"090ab18c399493a7632644e791d301e180e7fd36767e481206444032dbd9da80"} Feb 17 08:56:45 crc kubenswrapper[4813]: I0217 08:56:45.430891 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:45 crc kubenswrapper[4813]: I0217 08:56:45.433259 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" event={"ID":"ff0f7626-5da3-4763-8ae6-714ede4a2445","Type":"ContainerStarted","Data":"8933ee5eb05bb1bec3d980381a435c7b50babc24eb3611db886cfba39842b461"} Feb 17 08:56:45 crc kubenswrapper[4813]: I0217 
08:56:45.433561 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:45 crc kubenswrapper[4813]: I0217 08:56:45.464975 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" podStartSLOduration=34.249036686 podStartE2EDuration="36.464948427s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:42.820486413 +0000 UTC m=+950.481247636" lastFinishedPulling="2026-02-17 08:56:45.036398154 +0000 UTC m=+952.697159377" observedRunningTime="2026-02-17 08:56:45.459401859 +0000 UTC m=+953.120163122" watchObservedRunningTime="2026-02-17 08:56:45.464948427 +0000 UTC m=+953.125709660" Feb 17 08:56:45 crc kubenswrapper[4813]: I0217 08:56:45.488210 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" podStartSLOduration=33.789164893 podStartE2EDuration="36.488189188s" podCreationTimestamp="2026-02-17 08:56:09 +0000 UTC" firstStartedPulling="2026-02-17 08:56:42.311758741 +0000 UTC m=+949.972519974" lastFinishedPulling="2026-02-17 08:56:45.010783046 +0000 UTC m=+952.671544269" observedRunningTime="2026-02-17 08:56:45.483485584 +0000 UTC m=+953.144246827" watchObservedRunningTime="2026-02-17 08:56:45.488189188 +0000 UTC m=+953.148950421" Feb 17 08:56:46 crc kubenswrapper[4813]: I0217 08:56:46.787755 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:46 crc kubenswrapper[4813]: I0217 08:56:46.788754 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:46 crc kubenswrapper[4813]: I0217 08:56:46.852555 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:47 crc kubenswrapper[4813]: I0217 08:56:47.504102 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:47 crc kubenswrapper[4813]: I0217 08:56:47.616764 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lf94m"] Feb 17 08:56:49 crc kubenswrapper[4813]: I0217 08:56:49.464332 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lf94m" podUID="862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" containerName="registry-server" containerID="cri-o://585294dabcada4c62b9af9a8f7cfd49b2d41d7b1686738d1841c299427843214" gracePeriod=2 Feb 17 08:56:50 crc kubenswrapper[4813]: I0217 08:56:50.161982 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-k4qmx" Feb 17 08:56:50 crc kubenswrapper[4813]: I0217 08:56:50.451083 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-wbbfl" Feb 17 08:56:50 crc kubenswrapper[4813]: I0217 08:56:50.476273 4813 generic.go:334] "Generic (PLEG): container finished" podID="862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" containerID="585294dabcada4c62b9af9a8f7cfd49b2d41d7b1686738d1841c299427843214" exitCode=0 Feb 17 08:56:50 crc kubenswrapper[4813]: I0217 08:56:50.476341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lf94m" event={"ID":"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1","Type":"ContainerDied","Data":"585294dabcada4c62b9af9a8f7cfd49b2d41d7b1686738d1841c299427843214"} Feb 17 08:56:50 crc kubenswrapper[4813]: I0217 08:56:50.546693 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-85c99d655-zzj6k" Feb 17 08:56:50 crc kubenswrapper[4813]: I0217 08:56:50.587411 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-w9mll" Feb 17 08:56:50 crc kubenswrapper[4813]: I0217 08:56:50.649467 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-4sccr" Feb 17 08:56:50 crc kubenswrapper[4813]: I0217 08:56:50.790048 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" Feb 17 08:56:50 crc kubenswrapper[4813]: I0217 08:56:50.893296 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.042715 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-catalog-content\") pod \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\" (UID: \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\") " Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.042776 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-utilities\") pod \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\" (UID: \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\") " Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.042830 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbbs5\" (UniqueName: \"kubernetes.io/projected/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-kube-api-access-mbbs5\") pod \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\" (UID: \"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1\") " Feb 17 
08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.044394 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-utilities" (OuterVolumeSpecName: "utilities") pod "862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" (UID: "862ea87f-d6ca-40f3-8e76-2e6e4542e1b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.051975 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-kube-api-access-mbbs5" (OuterVolumeSpecName: "kube-api-access-mbbs5") pod "862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" (UID: "862ea87f-d6ca-40f3-8e76-2e6e4542e1b1"). InnerVolumeSpecName "kube-api-access-mbbs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.071386 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" (UID: "862ea87f-d6ca-40f3-8e76-2e6e4542e1b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.144499 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.144534 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.144547 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbbs5\" (UniqueName: \"kubernetes.io/projected/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1-kube-api-access-mbbs5\") on node \"crc\" DevicePath \"\"" Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.491445 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lf94m" Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.492250 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lf94m" event={"ID":"862ea87f-d6ca-40f3-8e76-2e6e4542e1b1","Type":"ContainerDied","Data":"8cbbbc479e587a392c74603570ff3e67849a2dc50487bfd89485940ac681c7f6"} Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.492334 4813 scope.go:117] "RemoveContainer" containerID="585294dabcada4c62b9af9a8f7cfd49b2d41d7b1686738d1841c299427843214" Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.525005 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lf94m"] Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.529489 4813 scope.go:117] "RemoveContainer" containerID="95890bc6ca126e19788586983a8f01b99dd21ad0e7ffcb81f76de130aef710a8" Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.531663 4813 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lf94m"] Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.557381 4813 scope.go:117] "RemoveContainer" containerID="270045c2fef5893a613d458b36785a9a9716d73dde2a5370f3994eec72ba9a7c" Feb 17 08:56:51 crc kubenswrapper[4813]: I0217 08:56:51.940084 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-flpcz" Feb 17 08:56:52 crc kubenswrapper[4813]: I0217 08:56:52.333061 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m" Feb 17 08:56:52 crc kubenswrapper[4813]: I0217 08:56:52.757436 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6bf8b7b945-pqgpx" Feb 17 08:56:53 crc kubenswrapper[4813]: I0217 08:56:53.128268 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" path="/var/lib/kubelet/pods/862ea87f-d6ca-40f3-8e76-2e6e4542e1b1/volumes" Feb 17 08:56:57 crc kubenswrapper[4813]: I0217 08:56:57.346876 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv"] Feb 17 08:56:57 crc kubenswrapper[4813]: I0217 08:56:57.347780 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" podUID="f7cbb633-768c-4ab3-9243-252a24046c73" containerName="manager" containerID="cri-o://5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404" gracePeriod=10 Feb 17 08:56:57 crc kubenswrapper[4813]: I0217 08:56:57.374621 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6"] Feb 17 08:56:57 crc kubenswrapper[4813]: 
I0217 08:56:57.374861 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6" podUID="572cdf9b-6953-4201-961f-5f2404993f44" containerName="operator" containerID="cri-o://a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9" gracePeriod=10 Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.422858 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.424974 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.556543 4813 generic.go:334] "Generic (PLEG): container finished" podID="f7cbb633-768c-4ab3-9243-252a24046c73" containerID="5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404" exitCode=0 Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.556731 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" event={"ID":"f7cbb633-768c-4ab3-9243-252a24046c73","Type":"ContainerDied","Data":"5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404"} Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.556771 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" event={"ID":"f7cbb633-768c-4ab3-9243-252a24046c73","Type":"ContainerDied","Data":"3257d1e4df7d36e066eeccbc619ff44565526c6449890b11531c1884ae8a911d"} Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.556797 4813 scope.go:117] "RemoveContainer" containerID="5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.556983 4813 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.559501 4813 generic.go:334] "Generic (PLEG): container finished" podID="572cdf9b-6953-4201-961f-5f2404993f44" containerID="a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9" exitCode=0 Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.559559 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.559580 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6" event={"ID":"572cdf9b-6953-4201-961f-5f2404993f44","Type":"ContainerDied","Data":"a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9"} Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.559970 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6" event={"ID":"572cdf9b-6953-4201-961f-5f2404993f44","Type":"ContainerDied","Data":"50eb9fd709c2243b0489d2fcb5e00c68cf121c8ae4e9fad4bfc8b057dd936ab6"} Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.565742 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrdrk\" (UniqueName: \"kubernetes.io/projected/572cdf9b-6953-4201-961f-5f2404993f44-kube-api-access-mrdrk\") pod \"572cdf9b-6953-4201-961f-5f2404993f44\" (UID: \"572cdf9b-6953-4201-961f-5f2404993f44\") " Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.565853 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp79j\" (UniqueName: \"kubernetes.io/projected/f7cbb633-768c-4ab3-9243-252a24046c73-kube-api-access-zp79j\") pod \"f7cbb633-768c-4ab3-9243-252a24046c73\" 
(UID: \"f7cbb633-768c-4ab3-9243-252a24046c73\") " Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.571740 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/572cdf9b-6953-4201-961f-5f2404993f44-kube-api-access-mrdrk" (OuterVolumeSpecName: "kube-api-access-mrdrk") pod "572cdf9b-6953-4201-961f-5f2404993f44" (UID: "572cdf9b-6953-4201-961f-5f2404993f44"). InnerVolumeSpecName "kube-api-access-mrdrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.571791 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7cbb633-768c-4ab3-9243-252a24046c73-kube-api-access-zp79j" (OuterVolumeSpecName: "kube-api-access-zp79j") pod "f7cbb633-768c-4ab3-9243-252a24046c73" (UID: "f7cbb633-768c-4ab3-9243-252a24046c73"). InnerVolumeSpecName "kube-api-access-zp79j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.589443 4813 scope.go:117] "RemoveContainer" containerID="5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404" Feb 17 08:56:58 crc kubenswrapper[4813]: E0217 08:56:58.589945 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404\": container with ID starting with 5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404 not found: ID does not exist" containerID="5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.589999 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404"} err="failed to get container status \"5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404\": rpc error: code = NotFound desc = could 
not find container \"5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404\": container with ID starting with 5c4ac8d2824c51eedb8cac6a31f40bbfec221fff448dc2379e4e3419e97a0404 not found: ID does not exist" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.590031 4813 scope.go:117] "RemoveContainer" containerID="a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.614463 4813 scope.go:117] "RemoveContainer" containerID="a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9" Feb 17 08:56:58 crc kubenswrapper[4813]: E0217 08:56:58.614994 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9\": container with ID starting with a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9 not found: ID does not exist" containerID="a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.615068 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9"} err="failed to get container status \"a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9\": rpc error: code = NotFound desc = could not find container \"a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9\": container with ID starting with a2b4edf9e90fb2c76b88488f00a4fdfa7db49a57926772c06f06a729267ccbf9 not found: ID does not exist" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.668205 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrdrk\" (UniqueName: \"kubernetes.io/projected/572cdf9b-6953-4201-961f-5f2404993f44-kube-api-access-mrdrk\") on node \"crc\" DevicePath \"\"" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.668253 4813 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp79j\" (UniqueName: \"kubernetes.io/projected/f7cbb633-768c-4ab3-9243-252a24046c73-kube-api-access-zp79j\") on node \"crc\" DevicePath \"\"" Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.924676 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv"] Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.939728 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-56dcfd7757-sf2sv"] Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.961015 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6"] Feb 17 08:56:58 crc kubenswrapper[4813]: I0217 08:56:58.974287 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-776596fd4-6jgg6"] Feb 17 08:56:59 crc kubenswrapper[4813]: I0217 08:56:59.122875 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="572cdf9b-6953-4201-961f-5f2404993f44" path="/var/lib/kubelet/pods/572cdf9b-6953-4201-961f-5f2404993f44/volumes" Feb 17 08:56:59 crc kubenswrapper[4813]: I0217 08:56:59.124022 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7cbb633-768c-4ab3-9243-252a24046c73" path="/var/lib/kubelet/pods/f7cbb633-768c-4ab3-9243-252a24046c73/volumes" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.358807 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-jtsbh"] Feb 17 08:57:02 crc kubenswrapper[4813]: E0217 08:57:02.359465 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" containerName="registry-server" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.359485 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" containerName="registry-server" Feb 17 08:57:02 crc kubenswrapper[4813]: E0217 08:57:02.359508 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572cdf9b-6953-4201-961f-5f2404993f44" containerName="operator" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.359519 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="572cdf9b-6953-4201-961f-5f2404993f44" containerName="operator" Feb 17 08:57:02 crc kubenswrapper[4813]: E0217 08:57:02.359538 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" containerName="extract-content" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.359551 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" containerName="extract-content" Feb 17 08:57:02 crc kubenswrapper[4813]: E0217 08:57:02.359577 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" containerName="extract-utilities" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.359587 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" containerName="extract-utilities" Feb 17 08:57:02 crc kubenswrapper[4813]: E0217 08:57:02.359607 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cbb633-768c-4ab3-9243-252a24046c73" containerName="manager" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.359617 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cbb633-768c-4ab3-9243-252a24046c73" containerName="manager" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.359866 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="862ea87f-d6ca-40f3-8e76-2e6e4542e1b1" containerName="registry-server" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.359886 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="572cdf9b-6953-4201-961f-5f2404993f44" containerName="operator" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.359897 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7cbb633-768c-4ab3-9243-252a24046c73" containerName="manager" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.360505 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-jtsbh" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.376899 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-index-dockercfg-25kqs" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.382079 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-jtsbh"] Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.420604 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glf2f\" (UniqueName: \"kubernetes.io/projected/362049cf-9c30-4608-859d-99372ab77971-kube-api-access-glf2f\") pod \"watcher-operator-index-jtsbh\" (UID: \"362049cf-9c30-4608-859d-99372ab77971\") " pod="openstack-operators/watcher-operator-index-jtsbh" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.521486 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glf2f\" (UniqueName: \"kubernetes.io/projected/362049cf-9c30-4608-859d-99372ab77971-kube-api-access-glf2f\") pod \"watcher-operator-index-jtsbh\" (UID: \"362049cf-9c30-4608-859d-99372ab77971\") " pod="openstack-operators/watcher-operator-index-jtsbh" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.540138 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glf2f\" (UniqueName: \"kubernetes.io/projected/362049cf-9c30-4608-859d-99372ab77971-kube-api-access-glf2f\") pod \"watcher-operator-index-jtsbh\" (UID: 
\"362049cf-9c30-4608-859d-99372ab77971\") " pod="openstack-operators/watcher-operator-index-jtsbh" Feb 17 08:57:02 crc kubenswrapper[4813]: I0217 08:57:02.691192 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-jtsbh" Feb 17 08:57:03 crc kubenswrapper[4813]: I0217 08:57:03.275368 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-jtsbh"] Feb 17 08:57:03 crc kubenswrapper[4813]: W0217 08:57:03.279459 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod362049cf_9c30_4608_859d_99372ab77971.slice/crio-8577a179300e959f80e84e3730e9f918ac2d0485cc17db22b32b3daec86ac1b9 WatchSource:0}: Error finding container 8577a179300e959f80e84e3730e9f918ac2d0485cc17db22b32b3daec86ac1b9: Status 404 returned error can't find the container with id 8577a179300e959f80e84e3730e9f918ac2d0485cc17db22b32b3daec86ac1b9 Feb 17 08:57:03 crc kubenswrapper[4813]: I0217 08:57:03.598809 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-jtsbh" event={"ID":"362049cf-9c30-4608-859d-99372ab77971","Type":"ContainerStarted","Data":"8577a179300e959f80e84e3730e9f918ac2d0485cc17db22b32b3daec86ac1b9"} Feb 17 08:57:04 crc kubenswrapper[4813]: I0217 08:57:04.611526 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-jtsbh" event={"ID":"362049cf-9c30-4608-859d-99372ab77971","Type":"ContainerStarted","Data":"292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262"} Feb 17 08:57:04 crc kubenswrapper[4813]: I0217 08:57:04.657951 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-jtsbh" podStartSLOduration=2.465430553 podStartE2EDuration="2.65792209s" podCreationTimestamp="2026-02-17 08:57:02 +0000 UTC" firstStartedPulling="2026-02-17 
08:57:03.282697266 +0000 UTC m=+970.943458499" lastFinishedPulling="2026-02-17 08:57:03.475188803 +0000 UTC m=+971.135950036" observedRunningTime="2026-02-17 08:57:04.633523966 +0000 UTC m=+972.294285279" watchObservedRunningTime="2026-02-17 08:57:04.65792209 +0000 UTC m=+972.318683343" Feb 17 08:57:06 crc kubenswrapper[4813]: I0217 08:57:06.743783 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-jtsbh"] Feb 17 08:57:06 crc kubenswrapper[4813]: I0217 08:57:06.744109 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-index-jtsbh" podUID="362049cf-9c30-4608-859d-99372ab77971" containerName="registry-server" containerID="cri-o://292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262" gracePeriod=2 Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.182029 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-jtsbh" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.302069 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glf2f\" (UniqueName: \"kubernetes.io/projected/362049cf-9c30-4608-859d-99372ab77971-kube-api-access-glf2f\") pod \"362049cf-9c30-4608-859d-99372ab77971\" (UID: \"362049cf-9c30-4608-859d-99372ab77971\") " Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.307583 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/362049cf-9c30-4608-859d-99372ab77971-kube-api-access-glf2f" (OuterVolumeSpecName: "kube-api-access-glf2f") pod "362049cf-9c30-4608-859d-99372ab77971" (UID: "362049cf-9c30-4608-859d-99372ab77971"). InnerVolumeSpecName "kube-api-access-glf2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.351174 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-c6sg7"] Feb 17 08:57:07 crc kubenswrapper[4813]: E0217 08:57:07.351748 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362049cf-9c30-4608-859d-99372ab77971" containerName="registry-server" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.351847 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="362049cf-9c30-4608-859d-99372ab77971" containerName="registry-server" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.352138 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="362049cf-9c30-4608-859d-99372ab77971" containerName="registry-server" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.352764 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-c6sg7" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.369962 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-c6sg7"] Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.405417 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlgn6\" (UniqueName: \"kubernetes.io/projected/bc164bd8-d76c-4e04-b474-464d2e7785aa-kube-api-access-nlgn6\") pod \"watcher-operator-index-c6sg7\" (UID: \"bc164bd8-d76c-4e04-b474-464d2e7785aa\") " pod="openstack-operators/watcher-operator-index-c6sg7" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.405652 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glf2f\" (UniqueName: \"kubernetes.io/projected/362049cf-9c30-4608-859d-99372ab77971-kube-api-access-glf2f\") on node \"crc\" DevicePath \"\"" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.507649 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlgn6\" (UniqueName: \"kubernetes.io/projected/bc164bd8-d76c-4e04-b474-464d2e7785aa-kube-api-access-nlgn6\") pod \"watcher-operator-index-c6sg7\" (UID: \"bc164bd8-d76c-4e04-b474-464d2e7785aa\") " pod="openstack-operators/watcher-operator-index-c6sg7" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.539804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlgn6\" (UniqueName: \"kubernetes.io/projected/bc164bd8-d76c-4e04-b474-464d2e7785aa-kube-api-access-nlgn6\") pod \"watcher-operator-index-c6sg7\" (UID: \"bc164bd8-d76c-4e04-b474-464d2e7785aa\") " pod="openstack-operators/watcher-operator-index-c6sg7" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.635233 4813 generic.go:334] "Generic (PLEG): container finished" podID="362049cf-9c30-4608-859d-99372ab77971" containerID="292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262" exitCode=0 Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.635275 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-jtsbh" event={"ID":"362049cf-9c30-4608-859d-99372ab77971","Type":"ContainerDied","Data":"292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262"} Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.635325 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-jtsbh" event={"ID":"362049cf-9c30-4608-859d-99372ab77971","Type":"ContainerDied","Data":"8577a179300e959f80e84e3730e9f918ac2d0485cc17db22b32b3daec86ac1b9"} Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.635341 4813 scope.go:117] "RemoveContainer" containerID="292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.635948 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-jtsbh" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.656461 4813 scope.go:117] "RemoveContainer" containerID="292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262" Feb 17 08:57:07 crc kubenswrapper[4813]: E0217 08:57:07.656832 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262\": container with ID starting with 292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262 not found: ID does not exist" containerID="292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.656873 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262"} err="failed to get container status \"292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262\": rpc error: code = NotFound desc = could not find container \"292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262\": container with ID starting with 292a6bed87cb9ddc7a50c735c906051a12b6efd4f190ab004ea42716ca710262 not found: ID does not exist" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.673524 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-jtsbh"] Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.681709 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-c6sg7" Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.683561 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-index-jtsbh"] Feb 17 08:57:07 crc kubenswrapper[4813]: I0217 08:57:07.946838 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-c6sg7"] Feb 17 08:57:08 crc kubenswrapper[4813]: I0217 08:57:08.649242 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-c6sg7" event={"ID":"bc164bd8-d76c-4e04-b474-464d2e7785aa","Type":"ContainerStarted","Data":"e4c84544588ecc7f01d5d10340942b98ddd8e199406e6e37deb2a89b4cac5f5c"} Feb 17 08:57:08 crc kubenswrapper[4813]: I0217 08:57:08.650248 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-c6sg7" event={"ID":"bc164bd8-d76c-4e04-b474-464d2e7785aa","Type":"ContainerStarted","Data":"db446837462641362592b8ced9bde5f62147831eb1f218a324e8f3e7d17356c6"} Feb 17 08:57:08 crc kubenswrapper[4813]: I0217 08:57:08.670364 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-c6sg7" podStartSLOduration=1.294808157 podStartE2EDuration="1.670342971s" podCreationTimestamp="2026-02-17 08:57:07 +0000 UTC" firstStartedPulling="2026-02-17 08:57:07.954910017 +0000 UTC m=+975.615671280" lastFinishedPulling="2026-02-17 08:57:08.330444831 +0000 UTC m=+975.991206094" observedRunningTime="2026-02-17 08:57:08.666644305 +0000 UTC m=+976.327405568" watchObservedRunningTime="2026-02-17 08:57:08.670342971 +0000 UTC m=+976.331104194" Feb 17 08:57:09 crc kubenswrapper[4813]: I0217 08:57:09.120195 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="362049cf-9c30-4608-859d-99372ab77971" path="/var/lib/kubelet/pods/362049cf-9c30-4608-859d-99372ab77971/volumes" Feb 17 08:57:17 crc 
kubenswrapper[4813]: I0217 08:57:17.683479 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/watcher-operator-index-c6sg7" Feb 17 08:57:17 crc kubenswrapper[4813]: I0217 08:57:17.685457 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-index-c6sg7" Feb 17 08:57:17 crc kubenswrapper[4813]: I0217 08:57:17.735226 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/watcher-operator-index-c6sg7" Feb 17 08:57:17 crc kubenswrapper[4813]: I0217 08:57:17.785792 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-index-c6sg7" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.011461 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm"] Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.035042 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.039448 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-fb5f5" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.043128 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm"] Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.087520 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-util\") pod \"512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm\" (UID: \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\") " pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.087588 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-bundle\") pod \"512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm\" (UID: \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\") " pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.087612 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm975\" (UniqueName: \"kubernetes.io/projected/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-kube-api-access-vm975\") pod \"512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm\" (UID: \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\") " pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 
08:57:20.189502 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-util\") pod \"512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm\" (UID: \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\") " pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.189648 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-bundle\") pod \"512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm\" (UID: \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\") " pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.189695 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm975\" (UniqueName: \"kubernetes.io/projected/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-kube-api-access-vm975\") pod \"512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm\" (UID: \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\") " pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.190341 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-util\") pod \"512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm\" (UID: \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\") " pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.190525 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-bundle\") pod \"512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm\" (UID: \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\") " pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.212780 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm975\" (UniqueName: \"kubernetes.io/projected/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-kube-api-access-vm975\") pod \"512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm\" (UID: \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\") " pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.372229 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:20 crc kubenswrapper[4813]: I0217 08:57:20.855883 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm"] Feb 17 08:57:21 crc kubenswrapper[4813]: I0217 08:57:21.754280 4813 generic.go:334] "Generic (PLEG): container finished" podID="f6226cf6-afa1-48af-8aa9-6f0191f76fa6" containerID="83bb46d78b2294ffa94448d46f558cac434745e5b8821d997f26bd578a7a5d06" exitCode=0 Feb 17 08:57:21 crc kubenswrapper[4813]: I0217 08:57:21.754367 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" event={"ID":"f6226cf6-afa1-48af-8aa9-6f0191f76fa6","Type":"ContainerDied","Data":"83bb46d78b2294ffa94448d46f558cac434745e5b8821d997f26bd578a7a5d06"} Feb 17 08:57:21 crc kubenswrapper[4813]: I0217 08:57:21.754404 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" event={"ID":"f6226cf6-afa1-48af-8aa9-6f0191f76fa6","Type":"ContainerStarted","Data":"d3b7f936fd8753457d1ede3dd711a81070ac32e4bb0af2ae8fde708d4f339361"} Feb 17 08:57:22 crc kubenswrapper[4813]: I0217 08:57:22.767002 4813 generic.go:334] "Generic (PLEG): container finished" podID="f6226cf6-afa1-48af-8aa9-6f0191f76fa6" containerID="71e012ab24c0766f7e9758acfd5b7391715140b5372e00a4a6f2f9598383249f" exitCode=0 Feb 17 08:57:22 crc kubenswrapper[4813]: I0217 08:57:22.767130 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" event={"ID":"f6226cf6-afa1-48af-8aa9-6f0191f76fa6","Type":"ContainerDied","Data":"71e012ab24c0766f7e9758acfd5b7391715140b5372e00a4a6f2f9598383249f"} Feb 17 08:57:23 crc kubenswrapper[4813]: I0217 08:57:23.778011 4813 generic.go:334] "Generic (PLEG): container finished" podID="f6226cf6-afa1-48af-8aa9-6f0191f76fa6" containerID="e03fc29bfe168e0877d30aac20a70876f7f704847acad48c62a0edcfd239a023" exitCode=0 Feb 17 08:57:23 crc kubenswrapper[4813]: I0217 08:57:23.778104 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" event={"ID":"f6226cf6-afa1-48af-8aa9-6f0191f76fa6","Type":"ContainerDied","Data":"e03fc29bfe168e0877d30aac20a70876f7f704847acad48c62a0edcfd239a023"} Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.200003 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.275953 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm975\" (UniqueName: \"kubernetes.io/projected/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-kube-api-access-vm975\") pod \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\" (UID: \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\") " Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.276233 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-bundle\") pod \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\" (UID: \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\") " Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.276298 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-util\") pod \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\" (UID: \"f6226cf6-afa1-48af-8aa9-6f0191f76fa6\") " Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.278218 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-bundle" (OuterVolumeSpecName: "bundle") pod "f6226cf6-afa1-48af-8aa9-6f0191f76fa6" (UID: "f6226cf6-afa1-48af-8aa9-6f0191f76fa6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.287097 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-kube-api-access-vm975" (OuterVolumeSpecName: "kube-api-access-vm975") pod "f6226cf6-afa1-48af-8aa9-6f0191f76fa6" (UID: "f6226cf6-afa1-48af-8aa9-6f0191f76fa6"). InnerVolumeSpecName "kube-api-access-vm975". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.301764 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-util" (OuterVolumeSpecName: "util") pod "f6226cf6-afa1-48af-8aa9-6f0191f76fa6" (UID: "f6226cf6-afa1-48af-8aa9-6f0191f76fa6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.378863 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.379114 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-util\") on node \"crc\" DevicePath \"\"" Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.379243 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm975\" (UniqueName: \"kubernetes.io/projected/f6226cf6-afa1-48af-8aa9-6f0191f76fa6-kube-api-access-vm975\") on node \"crc\" DevicePath \"\"" Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.795389 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" event={"ID":"f6226cf6-afa1-48af-8aa9-6f0191f76fa6","Type":"ContainerDied","Data":"d3b7f936fd8753457d1ede3dd711a81070ac32e4bb0af2ae8fde708d4f339361"} Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.795678 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3b7f936fd8753457d1ede3dd711a81070ac32e4bb0af2ae8fde708d4f339361" Feb 17 08:57:25 crc kubenswrapper[4813]: I0217 08:57:25.795422 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm" Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.313418 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"] Feb 17 08:57:31 crc kubenswrapper[4813]: E0217 08:57:31.314063 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6226cf6-afa1-48af-8aa9-6f0191f76fa6" containerName="util" Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.314074 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6226cf6-afa1-48af-8aa9-6f0191f76fa6" containerName="util" Feb 17 08:57:31 crc kubenswrapper[4813]: E0217 08:57:31.314089 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6226cf6-afa1-48af-8aa9-6f0191f76fa6" containerName="pull" Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.314095 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6226cf6-afa1-48af-8aa9-6f0191f76fa6" containerName="pull" Feb 17 08:57:31 crc kubenswrapper[4813]: E0217 08:57:31.314103 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6226cf6-afa1-48af-8aa9-6f0191f76fa6" containerName="extract" Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.314109 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6226cf6-afa1-48af-8aa9-6f0191f76fa6" containerName="extract" Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.314235 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6226cf6-afa1-48af-8aa9-6f0191f76fa6" containerName="extract" Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.314781 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf" Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.316572 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-znkld" Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.316853 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-service-cert" Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.325889 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"] Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.394494 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-webhook-cert\") pod \"watcher-operator-controller-manager-6995b9d9d7-g2qzf\" (UID: \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\") " pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf" Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.394623 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-apiservice-cert\") pod \"watcher-operator-controller-manager-6995b9d9d7-g2qzf\" (UID: \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\") " pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf" Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.394663 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8fzf\" (UniqueName: \"kubernetes.io/projected/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-kube-api-access-f8fzf\") pod \"watcher-operator-controller-manager-6995b9d9d7-g2qzf\" (UID: 
\"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\") " pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"
Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.496384 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8fzf\" (UniqueName: \"kubernetes.io/projected/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-kube-api-access-f8fzf\") pod \"watcher-operator-controller-manager-6995b9d9d7-g2qzf\" (UID: \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\") " pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"
Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.496460 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-webhook-cert\") pod \"watcher-operator-controller-manager-6995b9d9d7-g2qzf\" (UID: \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\") " pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"
Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.496644 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-apiservice-cert\") pod \"watcher-operator-controller-manager-6995b9d9d7-g2qzf\" (UID: \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\") " pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"
Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.508439 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-apiservice-cert\") pod \"watcher-operator-controller-manager-6995b9d9d7-g2qzf\" (UID: \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\") " pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"
Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.508814 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-webhook-cert\") pod \"watcher-operator-controller-manager-6995b9d9d7-g2qzf\" (UID: \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\") " pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"
Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.510971 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8fzf\" (UniqueName: \"kubernetes.io/projected/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-kube-api-access-f8fzf\") pod \"watcher-operator-controller-manager-6995b9d9d7-g2qzf\" (UID: \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\") " pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"
Feb 17 08:57:31 crc kubenswrapper[4813]: I0217 08:57:31.632062 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"
Feb 17 08:57:32 crc kubenswrapper[4813]: I0217 08:57:32.144644 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"]
Feb 17 08:57:32 crc kubenswrapper[4813]: W0217 08:57:32.152234 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9c1eae2_1dbf_4550_a627_e65f8bac9a2e.slice/crio-e24a0564a69d9154122735e4b1ae6abaccecafc29eabc647d251e83109126844 WatchSource:0}: Error finding container e24a0564a69d9154122735e4b1ae6abaccecafc29eabc647d251e83109126844: Status 404 returned error can't find the container with id e24a0564a69d9154122735e4b1ae6abaccecafc29eabc647d251e83109126844
Feb 17 08:57:32 crc kubenswrapper[4813]: I0217 08:57:32.853341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf" event={"ID":"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e","Type":"ContainerStarted","Data":"656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328"}
Feb 17 08:57:32 crc kubenswrapper[4813]: I0217 08:57:32.853424 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf" event={"ID":"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e","Type":"ContainerStarted","Data":"e24a0564a69d9154122735e4b1ae6abaccecafc29eabc647d251e83109126844"}
Feb 17 08:57:32 crc kubenswrapper[4813]: I0217 08:57:32.853537 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"
Feb 17 08:57:32 crc kubenswrapper[4813]: I0217 08:57:32.878395 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf" podStartSLOduration=1.878367661 podStartE2EDuration="1.878367661s" podCreationTimestamp="2026-02-17 08:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:57:32.873558845 +0000 UTC m=+1000.534320118" watchObservedRunningTime="2026-02-17 08:57:32.878367661 +0000 UTC m=+1000.539128904"
Feb 17 08:57:41 crc kubenswrapper[4813]: I0217 08:57:41.638560 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.565068 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"]
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.566874 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.586099 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"]
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.675280 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l66t5\" (UniqueName: \"kubernetes.io/projected/4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b-kube-api-access-l66t5\") pod \"watcher-operator-controller-manager-5fc5bdfc99-vm9tf\" (UID: \"4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b\") " pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.675329 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b-webhook-cert\") pod \"watcher-operator-controller-manager-5fc5bdfc99-vm9tf\" (UID: \"4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b\") " pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.675518 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b-apiservice-cert\") pod \"watcher-operator-controller-manager-5fc5bdfc99-vm9tf\" (UID: \"4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b\") " pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.777059 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b-apiservice-cert\") pod \"watcher-operator-controller-manager-5fc5bdfc99-vm9tf\" (UID: \"4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b\") " pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.777163 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l66t5\" (UniqueName: \"kubernetes.io/projected/4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b-kube-api-access-l66t5\") pod \"watcher-operator-controller-manager-5fc5bdfc99-vm9tf\" (UID: \"4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b\") " pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.777182 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b-webhook-cert\") pod \"watcher-operator-controller-manager-5fc5bdfc99-vm9tf\" (UID: \"4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b\") " pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.790472 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b-apiservice-cert\") pod \"watcher-operator-controller-manager-5fc5bdfc99-vm9tf\" (UID: \"4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b\") " pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.790528 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b-webhook-cert\") pod \"watcher-operator-controller-manager-5fc5bdfc99-vm9tf\" (UID: \"4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b\") " pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.792010 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l66t5\" (UniqueName: \"kubernetes.io/projected/4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b-kube-api-access-l66t5\") pod \"watcher-operator-controller-manager-5fc5bdfc99-vm9tf\" (UID: \"4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b\") " pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:43 crc kubenswrapper[4813]: I0217 08:57:43.889844 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:44 crc kubenswrapper[4813]: I0217 08:57:44.399656 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"]
Feb 17 08:57:44 crc kubenswrapper[4813]: I0217 08:57:44.965106 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf" event={"ID":"4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b","Type":"ContainerStarted","Data":"f5de05a1436b3873e5827883d42da048a56298ac00aa44e0177726d3a7b3f5e3"}
Feb 17 08:57:44 crc kubenswrapper[4813]: I0217 08:57:44.965611 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:44 crc kubenswrapper[4813]: I0217 08:57:44.965637 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf" event={"ID":"4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b","Type":"ContainerStarted","Data":"3fdd6bce0ee0dd5bd0ba3c0b2f428c3f47e89271ef11a1c425d6635df557bd1b"}
Feb 17 08:57:44 crc kubenswrapper[4813]: I0217 08:57:44.985162 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf" podStartSLOduration=1.98514261 podStartE2EDuration="1.98514261s" podCreationTimestamp="2026-02-17 08:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:57:44.979524891 +0000 UTC m=+1012.640286124" watchObservedRunningTime="2026-02-17 08:57:44.98514261 +0000 UTC m=+1012.645903853"
Feb 17 08:57:53 crc kubenswrapper[4813]: I0217 08:57:53.894684 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5fc5bdfc99-vm9tf"
Feb 17 08:57:53 crc kubenswrapper[4813]: I0217 08:57:53.968238 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"]
Feb 17 08:57:53 crc kubenswrapper[4813]: I0217 08:57:53.968785 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf" podUID="d9c1eae2-1dbf-4550-a627-e65f8bac9a2e" containerName="manager" containerID="cri-o://656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328" gracePeriod=10
Feb 17 08:57:54 crc kubenswrapper[4813]: I0217 08:57:54.461083 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"
Feb 17 08:57:54 crc kubenswrapper[4813]: I0217 08:57:54.658068 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-apiservice-cert\") pod \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\" (UID: \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\") "
Feb 17 08:57:54 crc kubenswrapper[4813]: I0217 08:57:54.658144 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-webhook-cert\") pod \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\" (UID: \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\") "
Feb 17 08:57:54 crc kubenswrapper[4813]: I0217 08:57:54.658442 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8fzf\" (UniqueName: \"kubernetes.io/projected/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-kube-api-access-f8fzf\") pod \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\" (UID: \"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e\") "
Feb 17 08:57:54 crc kubenswrapper[4813]: I0217 08:57:54.664067 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d9c1eae2-1dbf-4550-a627-e65f8bac9a2e" (UID: "d9c1eae2-1dbf-4550-a627-e65f8bac9a2e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:57:54 crc kubenswrapper[4813]: I0217 08:57:54.664861 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-kube-api-access-f8fzf" (OuterVolumeSpecName: "kube-api-access-f8fzf") pod "d9c1eae2-1dbf-4550-a627-e65f8bac9a2e" (UID: "d9c1eae2-1dbf-4550-a627-e65f8bac9a2e"). InnerVolumeSpecName "kube-api-access-f8fzf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 08:57:54 crc kubenswrapper[4813]: I0217 08:57:54.665559 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "d9c1eae2-1dbf-4550-a627-e65f8bac9a2e" (UID: "d9c1eae2-1dbf-4550-a627-e65f8bac9a2e"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 08:57:54 crc kubenswrapper[4813]: I0217 08:57:54.760023 4813 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-apiservice-cert\") on node \"crc\" DevicePath \"\""
Feb 17 08:57:54 crc kubenswrapper[4813]: I0217 08:57:54.760057 4813 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-webhook-cert\") on node \"crc\" DevicePath \"\""
Feb 17 08:57:54 crc kubenswrapper[4813]: I0217 08:57:54.760067 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8fzf\" (UniqueName: \"kubernetes.io/projected/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e-kube-api-access-f8fzf\") on node \"crc\" DevicePath \"\""
Feb 17 08:57:55 crc kubenswrapper[4813]: I0217 08:57:55.041963 4813 generic.go:334] "Generic (PLEG): container finished" podID="d9c1eae2-1dbf-4550-a627-e65f8bac9a2e" containerID="656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328" exitCode=0
Feb 17 08:57:55 crc kubenswrapper[4813]: I0217 08:57:55.042022 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf" event={"ID":"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e","Type":"ContainerDied","Data":"656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328"}
Feb 17 08:57:55 crc kubenswrapper[4813]: I0217 08:57:55.042081 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf" event={"ID":"d9c1eae2-1dbf-4550-a627-e65f8bac9a2e","Type":"ContainerDied","Data":"e24a0564a69d9154122735e4b1ae6abaccecafc29eabc647d251e83109126844"}
Feb 17 08:57:55 crc kubenswrapper[4813]: I0217 08:57:55.042109 4813 scope.go:117] "RemoveContainer" containerID="656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328"
Feb 17 08:57:55 crc kubenswrapper[4813]: I0217 08:57:55.042115 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"
Feb 17 08:57:55 crc kubenswrapper[4813]: I0217 08:57:55.071018 4813 scope.go:117] "RemoveContainer" containerID="656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328"
Feb 17 08:57:55 crc kubenswrapper[4813]: E0217 08:57:55.071722 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328\": container with ID starting with 656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328 not found: ID does not exist" containerID="656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328"
Feb 17 08:57:55 crc kubenswrapper[4813]: I0217 08:57:55.071780 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328"} err="failed to get container status \"656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328\": rpc error: code = NotFound desc = could not find container \"656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328\": container with ID starting with 656d80d50ec7aa7f6387fbd58a62b72279d8438ea50d7583c3edc13b9987f328 not found: ID does not exist"
Feb 17 08:57:55 crc kubenswrapper[4813]: I0217 08:57:55.094163 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"]
Feb 17 08:57:55 crc kubenswrapper[4813]: I0217 08:57:55.101733 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6995b9d9d7-g2qzf"]
Feb 17 08:57:55 crc kubenswrapper[4813]: I0217 08:57:55.129262 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c1eae2-1dbf-4550-a627-e65f8bac9a2e" path="/var/lib/kubelet/pods/d9c1eae2-1dbf-4550-a627-e65f8bac9a2e/volumes"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.148143 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"]
Feb 17 08:58:09 crc kubenswrapper[4813]: E0217 08:58:09.150512 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c1eae2-1dbf-4550-a627-e65f8bac9a2e" containerName="manager"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.150536 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c1eae2-1dbf-4550-a627-e65f8bac9a2e" containerName="manager"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.150781 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c1eae2-1dbf-4550-a627-e65f8bac9a2e" containerName="manager"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.151981 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.154593 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-erlang-cookie"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.155210 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-plugins-conf"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.155493 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-config-data"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.155825 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-conf"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.156132 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-default-user"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.156371 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"kube-root-ca.crt"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.156595 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-notifications-svc"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.156958 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-dockercfg-k4bkv"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.157103 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openshift-service-ca.crt"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.196107 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"]
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.289255 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9a8ae31-c620-49c7-9752-6b045300e16a-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.289757 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9a8ae31-c620-49c7-9752-6b045300e16a-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.289870 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9a8ae31-c620-49c7-9752-6b045300e16a-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.289941 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a420de3c-e499-4935-9ac8-f3ace192ae09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a420de3c-e499-4935-9ac8-f3ace192ae09\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.289991 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9a8ae31-c620-49c7-9752-6b045300e16a-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.290065 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9a8ae31-c620-49c7-9752-6b045300e16a-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.290131 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9a8ae31-c620-49c7-9752-6b045300e16a-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.290175 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9a8ae31-c620-49c7-9752-6b045300e16a-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.290260 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhlgc\" (UniqueName: \"kubernetes.io/projected/a9a8ae31-c620-49c7-9752-6b045300e16a-kube-api-access-vhlgc\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.290343 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9a8ae31-c620-49c7-9752-6b045300e16a-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.290421 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9a8ae31-c620-49c7-9752-6b045300e16a-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.392275 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9a8ae31-c620-49c7-9752-6b045300e16a-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.392390 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9a8ae31-c620-49c7-9752-6b045300e16a-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.392438 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a420de3c-e499-4935-9ac8-f3ace192ae09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a420de3c-e499-4935-9ac8-f3ace192ae09\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.392477 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9a8ae31-c620-49c7-9752-6b045300e16a-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.392528 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9a8ae31-c620-49c7-9752-6b045300e16a-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.392590 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9a8ae31-c620-49c7-9752-6b045300e16a-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.392638 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9a8ae31-c620-49c7-9752-6b045300e16a-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.392698 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhlgc\" (UniqueName: \"kubernetes.io/projected/a9a8ae31-c620-49c7-9752-6b045300e16a-kube-api-access-vhlgc\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.392750 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9a8ae31-c620-49c7-9752-6b045300e16a-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.392823 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9a8ae31-c620-49c7-9752-6b045300e16a-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.392931 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9a8ae31-c620-49c7-9752-6b045300e16a-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.394645 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9a8ae31-c620-49c7-9752-6b045300e16a-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.394923 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9a8ae31-c620-49c7-9752-6b045300e16a-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.395715 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9a8ae31-c620-49c7-9752-6b045300e16a-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.396188 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9a8ae31-c620-49c7-9752-6b045300e16a-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.396843 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9a8ae31-c620-49c7-9752-6b045300e16a-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.397348 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.397393 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a420de3c-e499-4935-9ac8-f3ace192ae09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a420de3c-e499-4935-9ac8-f3ace192ae09\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7feace963b496133861bb4eb3ca2a1fd0de241f01d47b83998e4f0d3dea4d93b/globalmount\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.400952 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9a8ae31-c620-49c7-9752-6b045300e16a-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.401938 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9a8ae31-c620-49c7-9752-6b045300e16a-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.403765 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9a8ae31-c620-49c7-9752-6b045300e16a-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.405807 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9a8ae31-c620-49c7-9752-6b045300e16a-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.428874 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhlgc\" (UniqueName: \"kubernetes.io/projected/a9a8ae31-c620-49c7-9752-6b045300e16a-kube-api-access-vhlgc\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.443537 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a420de3c-e499-4935-9ac8-f3ace192ae09\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a420de3c-e499-4935-9ac8-f3ace192ae09\") pod \"rabbitmq-notifications-server-0\" (UID: \"a9a8ae31-c620-49c7-9752-6b045300e16a\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.475543 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.594372 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"]
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.596235 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.600705 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-default-user"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.600937 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-erlang-cookie"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.601102 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-plugins-conf"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.601387 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-server-dockercfg-wf5q4"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.601635 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-config-data"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.601783 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-server-conf"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.602106 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-svc"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.620194 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"]
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.699768 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67e1e624-5191-497b-9c9a-ff9d281c0871-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.699819 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67e1e624-5191-497b-9c9a-ff9d281c0871-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.699867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67e1e624-5191-497b-9c9a-ff9d281c0871-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.699903 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6dr9\" (UniqueName: \"kubernetes.io/projected/67e1e624-5191-497b-9c9a-ff9d281c0871-kube-api-access-c6dr9\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.699925 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67e1e624-5191-497b-9c9a-ff9d281c0871-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.699943 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67e1e624-5191-497b-9c9a-ff9d281c0871-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.699972 4813
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67e1e624-5191-497b-9c9a-ff9d281c0871-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.700004 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67e1e624-5191-497b-9c9a-ff9d281c0871-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.700053 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6c3770b8-675a-4eea-a786-c3d024a3112d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c3770b8-675a-4eea-a786-c3d024a3112d\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.700098 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67e1e624-5191-497b-9c9a-ff9d281c0871-config-data\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.700121 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67e1e624-5191-497b-9c9a-ff9d281c0871-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 
08:58:09.801746 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6c3770b8-675a-4eea-a786-c3d024a3112d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c3770b8-675a-4eea-a786-c3d024a3112d\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.801821 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67e1e624-5191-497b-9c9a-ff9d281c0871-config-data\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.801849 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67e1e624-5191-497b-9c9a-ff9d281c0871-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.801878 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67e1e624-5191-497b-9c9a-ff9d281c0871-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.801896 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67e1e624-5191-497b-9c9a-ff9d281c0871-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.802058 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67e1e624-5191-497b-9c9a-ff9d281c0871-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.802658 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6dr9\" (UniqueName: \"kubernetes.io/projected/67e1e624-5191-497b-9c9a-ff9d281c0871-kube-api-access-c6dr9\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.802686 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67e1e624-5191-497b-9c9a-ff9d281c0871-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.802709 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67e1e624-5191-497b-9c9a-ff9d281c0871-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.802735 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67e1e624-5191-497b-9c9a-ff9d281c0871-config-data\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.802744 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/67e1e624-5191-497b-9c9a-ff9d281c0871-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.802796 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67e1e624-5191-497b-9c9a-ff9d281c0871-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.803478 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67e1e624-5191-497b-9c9a-ff9d281c0871-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.803590 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67e1e624-5191-497b-9c9a-ff9d281c0871-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.803890 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67e1e624-5191-497b-9c9a-ff9d281c0871-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.804016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67e1e624-5191-497b-9c9a-ff9d281c0871-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.806626 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67e1e624-5191-497b-9c9a-ff9d281c0871-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.807078 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67e1e624-5191-497b-9c9a-ff9d281c0871-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.807280 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67e1e624-5191-497b-9c9a-ff9d281c0871-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.807382 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.807525 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6c3770b8-675a-4eea-a786-c3d024a3112d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c3770b8-675a-4eea-a786-c3d024a3112d\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6d298322a3021a854b8671696b14c461a6199c59d98bdbdaebf2102877abedd6/globalmount\"" pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.816823 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67e1e624-5191-497b-9c9a-ff9d281c0871-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.824449 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6dr9\" (UniqueName: \"kubernetes.io/projected/67e1e624-5191-497b-9c9a-ff9d281c0871-kube-api-access-c6dr9\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.846838 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6c3770b8-675a-4eea-a786-c3d024a3112d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6c3770b8-675a-4eea-a786-c3d024a3112d\") pod \"rabbitmq-server-0\" (UID: \"67e1e624-5191-497b-9c9a-ff9d281c0871\") " pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.927451 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:09 crc kubenswrapper[4813]: I0217 08:58:09.985363 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Feb 17 08:58:10 crc kubenswrapper[4813]: I0217 08:58:10.161779 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"a9a8ae31-c620-49c7-9752-6b045300e16a","Type":"ContainerStarted","Data":"d29667f615e7014fd9459001b43ce46333886329606d45ae5b8dd4fb6764d81f"} Feb 17 08:58:10 crc kubenswrapper[4813]: I0217 08:58:10.404416 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Feb 17 08:58:10 crc kubenswrapper[4813]: W0217 08:58:10.405800 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67e1e624_5191_497b_9c9a_ff9d281c0871.slice/crio-018e2fd6dc29638ab65c3571eaf171e3f2bcf4587638e8b4ae6f02a1f8eea732 WatchSource:0}: Error finding container 018e2fd6dc29638ab65c3571eaf171e3f2bcf4587638e8b4ae6f02a1f8eea732: Status 404 returned error can't find the container with id 018e2fd6dc29638ab65c3571eaf171e3f2bcf4587638e8b4ae6f02a1f8eea732 Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.176115 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"67e1e624-5191-497b-9c9a-ff9d281c0871","Type":"ContainerStarted","Data":"018e2fd6dc29638ab65c3571eaf171e3f2bcf4587638e8b4ae6f02a1f8eea732"} Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.197319 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.199119 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.204555 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-galera-openstack-svc" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.205094 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config-data" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.205425 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"galera-openstack-dockercfg-td2dp" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.205774 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.205827 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-scripts" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.209349 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"combined-ca-bundle" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.323577 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a68bb5-cac8-4f13-892a-272304837676-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.323624 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj9dh\" (UniqueName: \"kubernetes.io/projected/e3a68bb5-cac8-4f13-892a-272304837676-kube-api-access-mj9dh\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc 
kubenswrapper[4813]: I0217 08:58:11.323687 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3a68bb5-cac8-4f13-892a-272304837676-kolla-config\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.323708 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e3a68bb5-cac8-4f13-892a-272304837676-config-data-default\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.323730 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a68bb5-cac8-4f13-892a-272304837676-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.323756 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3a68bb5-cac8-4f13-892a-272304837676-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.323781 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a5563d7-54c6-41a4-b0fd-3d7dcdbe35da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a5563d7-54c6-41a4-b0fd-3d7dcdbe35da\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " 
pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.323802 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e3a68bb5-cac8-4f13-892a-272304837676-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.425091 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a5563d7-54c6-41a4-b0fd-3d7dcdbe35da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a5563d7-54c6-41a4-b0fd-3d7dcdbe35da\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.425142 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e3a68bb5-cac8-4f13-892a-272304837676-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.425182 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a68bb5-cac8-4f13-892a-272304837676-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.425198 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj9dh\" (UniqueName: \"kubernetes.io/projected/e3a68bb5-cac8-4f13-892a-272304837676-kube-api-access-mj9dh\") pod \"openstack-galera-0\" (UID: 
\"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.425248 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3a68bb5-cac8-4f13-892a-272304837676-kolla-config\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.425269 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e3a68bb5-cac8-4f13-892a-272304837676-config-data-default\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.425289 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a68bb5-cac8-4f13-892a-272304837676-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.425329 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3a68bb5-cac8-4f13-892a-272304837676-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.426527 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e3a68bb5-cac8-4f13-892a-272304837676-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " 
pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.426978 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e3a68bb5-cac8-4f13-892a-272304837676-kolla-config\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.427687 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3a68bb5-cac8-4f13-892a-272304837676-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.429278 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e3a68bb5-cac8-4f13-892a-272304837676-config-data-default\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.431853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3a68bb5-cac8-4f13-892a-272304837676-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.431968 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a68bb5-cac8-4f13-892a-272304837676-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.432167 4813 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.432193 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a5563d7-54c6-41a4-b0fd-3d7dcdbe35da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a5563d7-54c6-41a4-b0fd-3d7dcdbe35da\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e95dc7d58db7ef006bc4154d1fd77deda859590ba2852fc08400eeb8f8bfbd1b/globalmount\"" pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.461519 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj9dh\" (UniqueName: \"kubernetes.io/projected/e3a68bb5-cac8-4f13-892a-272304837676-kube-api-access-mj9dh\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.462271 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a5563d7-54c6-41a4-b0fd-3d7dcdbe35da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a5563d7-54c6-41a4-b0fd-3d7dcdbe35da\") pod \"openstack-galera-0\" (UID: \"e3a68bb5-cac8-4f13-892a-272304837676\") " pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.521019 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.572731 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.573658 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.575256 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-vcvwm"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.575523 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.575727 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.596805 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.628504 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/afb01180-51b9-47b4-8f48-8ec2ce45c286-memcached-tls-certs\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.628546 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/afb01180-51b9-47b4-8f48-8ec2ce45c286-kolla-config\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.628585 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb01180-51b9-47b4-8f48-8ec2ce45c286-combined-ca-bundle\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.628617 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j7vz\" (UniqueName: \"kubernetes.io/projected/afb01180-51b9-47b4-8f48-8ec2ce45c286-kube-api-access-8j7vz\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.628645 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afb01180-51b9-47b4-8f48-8ec2ce45c286-config-data\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.731049 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb01180-51b9-47b4-8f48-8ec2ce45c286-combined-ca-bundle\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.731417 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j7vz\" (UniqueName: \"kubernetes.io/projected/afb01180-51b9-47b4-8f48-8ec2ce45c286-kube-api-access-8j7vz\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.731464 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afb01180-51b9-47b4-8f48-8ec2ce45c286-config-data\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.731548 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/afb01180-51b9-47b4-8f48-8ec2ce45c286-memcached-tls-certs\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.731575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/afb01180-51b9-47b4-8f48-8ec2ce45c286-kolla-config\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.732634 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/afb01180-51b9-47b4-8f48-8ec2ce45c286-kolla-config\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.732691 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afb01180-51b9-47b4-8f48-8ec2ce45c286-config-data\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.734330 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb01180-51b9-47b4-8f48-8ec2ce45c286-combined-ca-bundle\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.739337 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/afb01180-51b9-47b4-8f48-8ec2ce45c286-memcached-tls-certs\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.747485 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j7vz\" (UniqueName: \"kubernetes.io/projected/afb01180-51b9-47b4-8f48-8ec2ce45c286-kube-api-access-8j7vz\") pod \"memcached-0\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.929390 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.945373 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.946439 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0"
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.950945 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Feb 17 08:58:11 crc kubenswrapper[4813]: I0217 08:58:11.954705 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"telemetry-ceilometer-dockercfg-l7rgh"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.037251 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvgtl\" (UniqueName: \"kubernetes.io/projected/a1969f4f-4612-43d8-bd57-6be840c9d815-kube-api-access-wvgtl\") pod \"kube-state-metrics-0\" (UID: \"a1969f4f-4612-43d8-bd57-6be840c9d815\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.102260 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"]
Feb 17 08:58:12 crc kubenswrapper[4813]: W0217 08:58:12.123228 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3a68bb5_cac8_4f13_892a_272304837676.slice/crio-5f603d7f3f2b686a3dd60e8fd41cc280b70e708f16f3f1ca71278233f9044f41 WatchSource:0}: Error finding container 5f603d7f3f2b686a3dd60e8fd41cc280b70e708f16f3f1ca71278233f9044f41: Status 404 returned error can't find the container with id 5f603d7f3f2b686a3dd60e8fd41cc280b70e708f16f3f1ca71278233f9044f41
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.138334 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvgtl\" (UniqueName: \"kubernetes.io/projected/a1969f4f-4612-43d8-bd57-6be840c9d815-kube-api-access-wvgtl\") pod \"kube-state-metrics-0\" (UID: \"a1969f4f-4612-43d8-bd57-6be840c9d815\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.177180 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvgtl\" (UniqueName: \"kubernetes.io/projected/a1969f4f-4612-43d8-bd57-6be840c9d815-kube-api-access-wvgtl\") pod \"kube-state-metrics-0\" (UID: \"a1969f4f-4612-43d8-bd57-6be840c9d815\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.195211 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"e3a68bb5-cac8-4f13-892a-272304837676","Type":"ContainerStarted","Data":"5f603d7f3f2b686a3dd60e8fd41cc280b70e708f16f3f1ca71278233f9044f41"}
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.339820 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.494433 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Feb 17 08:58:12 crc kubenswrapper[4813]: W0217 08:58:12.535581 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafb01180_51b9_47b4_8f48_8ec2ce45c286.slice/crio-7a785b158823d46aa76cde6f53a8482a706da384542919397904c612656577ad WatchSource:0}: Error finding container 7a785b158823d46aa76cde6f53a8482a706da384542919397904c612656577ad: Status 404 returned error can't find the container with id 7a785b158823d46aa76cde6f53a8482a706da384542919397904c612656577ad
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.560387 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"]
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.562107 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.566630 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-web-config"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.566653 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-generated"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.566655 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-cluster-tls-config"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.566655 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-tls-assets-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.566984 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-alertmanager-dockercfg-hmb9j"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.580746 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"]
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.648456 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1fbc11e5-9828-4421-bd30-bbf16df6be8f-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.648517 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/1fbc11e5-9828-4421-bd30-bbf16df6be8f-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.648550 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s992\" (UniqueName: \"kubernetes.io/projected/1fbc11e5-9828-4421-bd30-bbf16df6be8f-kube-api-access-7s992\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.648577 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1fbc11e5-9828-4421-bd30-bbf16df6be8f-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.648642 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1fbc11e5-9828-4421-bd30-bbf16df6be8f-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.648687 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1fbc11e5-9828-4421-bd30-bbf16df6be8f-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.648769 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1fbc11e5-9828-4421-bd30-bbf16df6be8f-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.750371 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1fbc11e5-9828-4421-bd30-bbf16df6be8f-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.750421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1fbc11e5-9828-4421-bd30-bbf16df6be8f-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.750447 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/1fbc11e5-9828-4421-bd30-bbf16df6be8f-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.750484 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s992\" (UniqueName: \"kubernetes.io/projected/1fbc11e5-9828-4421-bd30-bbf16df6be8f-kube-api-access-7s992\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.750503 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1fbc11e5-9828-4421-bd30-bbf16df6be8f-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.750549 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1fbc11e5-9828-4421-bd30-bbf16df6be8f-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.750602 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1fbc11e5-9828-4421-bd30-bbf16df6be8f-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.751054 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/1fbc11e5-9828-4421-bd30-bbf16df6be8f-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.756073 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1fbc11e5-9828-4421-bd30-bbf16df6be8f-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.756360 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1fbc11e5-9828-4421-bd30-bbf16df6be8f-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.768510 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1fbc11e5-9828-4421-bd30-bbf16df6be8f-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.768670 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1fbc11e5-9828-4421-bd30-bbf16df6be8f-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.768821 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/1fbc11e5-9828-4421-bd30-bbf16df6be8f-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.782968 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s992\" (UniqueName: \"kubernetes.io/projected/1fbc11e5-9828-4421-bd30-bbf16df6be8f-kube-api-access-7s992\") pod \"alertmanager-metric-storage-0\" (UID: \"1fbc11e5-9828-4421-bd30-bbf16df6be8f\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.849696 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Feb 17 08:58:12 crc kubenswrapper[4813]: W0217 08:58:12.859397 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1969f4f_4612_43d8_bd57_6be840c9d815.slice/crio-f7e7bb6f58298d86802c1728d414a800dca19892edef017a9ccc20bcb3c67d99 WatchSource:0}: Error finding container f7e7bb6f58298d86802c1728d414a800dca19892edef017a9ccc20bcb3c67d99: Status 404 returned error can't find the container with id f7e7bb6f58298d86802c1728d414a800dca19892edef017a9ccc20bcb3c67d99
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.881717 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.974426 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5"]
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.975965 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.983642 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-rgc8k"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.983858 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards"
Feb 17 08:58:12 crc kubenswrapper[4813]: I0217 08:58:12.989232 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5"]
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.058821 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/123ad867-ebe9-4ea6-acfe-82f25010549e-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-gh8k5\" (UID: \"123ad867-ebe9-4ea6-acfe-82f25010549e\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.058925 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6fpm\" (UniqueName: \"kubernetes.io/projected/123ad867-ebe9-4ea6-acfe-82f25010549e-kube-api-access-h6fpm\") pod \"observability-ui-dashboards-66cbf594b5-gh8k5\" (UID: \"123ad867-ebe9-4ea6-acfe-82f25010549e\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.160686 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6fpm\" (UniqueName: \"kubernetes.io/projected/123ad867-ebe9-4ea6-acfe-82f25010549e-kube-api-access-h6fpm\") pod \"observability-ui-dashboards-66cbf594b5-gh8k5\" (UID: \"123ad867-ebe9-4ea6-acfe-82f25010549e\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.160799 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/123ad867-ebe9-4ea6-acfe-82f25010549e-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-gh8k5\" (UID: \"123ad867-ebe9-4ea6-acfe-82f25010549e\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5"
Feb 17 08:58:13 crc kubenswrapper[4813]: E0217 08:58:13.160922 4813 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found
Feb 17 08:58:13 crc kubenswrapper[4813]: E0217 08:58:13.160978 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/123ad867-ebe9-4ea6-acfe-82f25010549e-serving-cert podName:123ad867-ebe9-4ea6-acfe-82f25010549e nodeName:}" failed. No retries permitted until 2026-02-17 08:58:13.660958326 +0000 UTC m=+1041.321719549 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/123ad867-ebe9-4ea6-acfe-82f25010549e-serving-cert") pod "observability-ui-dashboards-66cbf594b5-gh8k5" (UID: "123ad867-ebe9-4ea6-acfe-82f25010549e") : secret "observability-ui-dashboards" not found
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.217017 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6fpm\" (UniqueName: \"kubernetes.io/projected/123ad867-ebe9-4ea6-acfe-82f25010549e-kube-api-access-h6fpm\") pod \"observability-ui-dashboards-66cbf594b5-gh8k5\" (UID: \"123ad867-ebe9-4ea6-acfe-82f25010549e\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.251420 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"a1969f4f-4612-43d8-bd57-6be840c9d815","Type":"ContainerStarted","Data":"f7e7bb6f58298d86802c1728d414a800dca19892edef017a9ccc20bcb3c67d99"}
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.252473 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.263194 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"afb01180-51b9-47b4-8f48-8ec2ce45c286","Type":"ContainerStarted","Data":"7a785b158823d46aa76cde6f53a8482a706da384542919397904c612656577ad"}
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.263365 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.270236 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.270564 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.270679 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.271010 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.271190 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.271382 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.271554 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.271669 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-zrcnr"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.288723 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.363712 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-config\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.363778 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.363813 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.363840 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.363881 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b445db98-91a3-4ca2-9de9-96824d2ba6d2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.363913 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.363934 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b445db98-91a3-4ca2-9de9-96824d2ba6d2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.364190 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.364220 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.364240 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hf5g\" (UniqueName: \"kubernetes.io/projected/b445db98-91a3-4ca2-9de9-96824d2ba6d2-kube-api-access-9hf5g\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.384907 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-66c99764bb-74vbt"]
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.386613 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66c99764bb-74vbt"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.402091 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66c99764bb-74vbt"]
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476112 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d43d2b0-9e61-4f09-a55f-38a2055fb902-console-oauth-config\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476162 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476185 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d43d2b0-9e61-4f09-a55f-38a2055fb902-oauth-serving-cert\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476220 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d43d2b0-9e61-4f09-a55f-38a2055fb902-console-serving-cert\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476247 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b445db98-91a3-4ca2-9de9-96824d2ba6d2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476278 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476301 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b445db98-91a3-4ca2-9de9-96824d2ba6d2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476334 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476356 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476378 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hf5g\" (UniqueName: \"kubernetes.io/projected/b445db98-91a3-4ca2-9de9-96824d2ba6d2-kube-api-access-9hf5g\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476409 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d43d2b0-9e61-4f09-a55f-38a2055fb902-console-config\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476440 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d43d2b0-9e61-4f09-a55f-38a2055fb902-trusted-ca-bundle\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476461 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d43d2b0-9e61-4f09-a55f-38a2055fb902-service-ca\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476482 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-config\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476507 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476527 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdmrk\" (UniqueName: \"kubernetes.io/projected/0d43d2b0-9e61-4f09-a55f-38a2055fb902-kube-api-access-vdmrk\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.476551 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.477253 4813 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.480179 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.486632 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.492545 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b445db98-91a3-4ca2-9de9-96824d2ba6d2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.492749 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b445db98-91a3-4ca2-9de9-96824d2ba6d2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:13 
crc kubenswrapper[4813]: I0217 08:58:13.496741 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.501074 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.503271 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-config\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.512358 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hf5g\" (UniqueName: \"kubernetes.io/projected/b445db98-91a3-4ca2-9de9-96824d2ba6d2-kube-api-access-9hf5g\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.530381 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.530423 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1e200dfcdbaa5702b94bbdc90ab350504c0d12de0768779fa6f433eb34cbbe0c/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.581730 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d43d2b0-9e61-4f09-a55f-38a2055fb902-console-config\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.581783 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d43d2b0-9e61-4f09-a55f-38a2055fb902-trusted-ca-bundle\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.581805 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d43d2b0-9e61-4f09-a55f-38a2055fb902-service-ca\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.581836 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdmrk\" (UniqueName: 
\"kubernetes.io/projected/0d43d2b0-9e61-4f09-a55f-38a2055fb902-kube-api-access-vdmrk\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.581868 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d43d2b0-9e61-4f09-a55f-38a2055fb902-console-oauth-config\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.581884 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d43d2b0-9e61-4f09-a55f-38a2055fb902-oauth-serving-cert\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.581915 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d43d2b0-9e61-4f09-a55f-38a2055fb902-console-serving-cert\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.586142 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d43d2b0-9e61-4f09-a55f-38a2055fb902-service-ca\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.586679 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/0d43d2b0-9e61-4f09-a55f-38a2055fb902-console-config\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.587344 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d43d2b0-9e61-4f09-a55f-38a2055fb902-trusted-ca-bundle\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.587920 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d43d2b0-9e61-4f09-a55f-38a2055fb902-oauth-serving-cert\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.604216 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d43d2b0-9e61-4f09-a55f-38a2055fb902-console-serving-cert\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.610076 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d43d2b0-9e61-4f09-a55f-38a2055fb902-console-oauth-config\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.611437 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdmrk\" (UniqueName: 
\"kubernetes.io/projected/0d43d2b0-9e61-4f09-a55f-38a2055fb902-kube-api-access-vdmrk\") pod \"console-66c99764bb-74vbt\" (UID: \"0d43d2b0-9e61-4f09-a55f-38a2055fb902\") " pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.621246 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.682922 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/123ad867-ebe9-4ea6-acfe-82f25010549e-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-gh8k5\" (UID: \"123ad867-ebe9-4ea6-acfe-82f25010549e\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.685951 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/123ad867-ebe9-4ea6-acfe-82f25010549e-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-gh8k5\" (UID: \"123ad867-ebe9-4ea6-acfe-82f25010549e\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.720290 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\") pod \"prometheus-metric-storage-0\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.726991 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.889786 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:13 crc kubenswrapper[4813]: I0217 08:58:13.899150 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5" Feb 17 08:58:13 crc kubenswrapper[4813]: W0217 08:58:13.927829 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fbc11e5_9828_4421_bd30_bbf16df6be8f.slice/crio-fa1a96c6c64e2aec29323345022b986c175224bac6c0c06a8eda740c9b598ed6 WatchSource:0}: Error finding container fa1a96c6c64e2aec29323345022b986c175224bac6c0c06a8eda740c9b598ed6: Status 404 returned error can't find the container with id fa1a96c6c64e2aec29323345022b986c175224bac6c0c06a8eda740c9b598ed6 Feb 17 08:58:14 crc kubenswrapper[4813]: I0217 08:58:14.279342 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"1fbc11e5-9828-4421-bd30-bbf16df6be8f","Type":"ContainerStarted","Data":"fa1a96c6c64e2aec29323345022b986c175224bac6c0c06a8eda740c9b598ed6"} Feb 17 08:58:14 crc kubenswrapper[4813]: I0217 08:58:14.647142 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66c99764bb-74vbt"] Feb 17 08:58:14 crc kubenswrapper[4813]: I0217 08:58:14.708883 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5"] Feb 17 08:58:14 crc kubenswrapper[4813]: W0217 08:58:14.871226 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d43d2b0_9e61_4f09_a55f_38a2055fb902.slice/crio-ec4a592a4f1c958bc7d902f5061a595f3fb7a9beed44fa3a10f5dab09fbe1c5c WatchSource:0}: Error finding container ec4a592a4f1c958bc7d902f5061a595f3fb7a9beed44fa3a10f5dab09fbe1c5c: Status 404 returned error can't find the container with id 
ec4a592a4f1c958bc7d902f5061a595f3fb7a9beed44fa3a10f5dab09fbe1c5c Feb 17 08:58:14 crc kubenswrapper[4813]: W0217 08:58:14.871539 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod123ad867_ebe9_4ea6_acfe_82f25010549e.slice/crio-29ae21a2e1ff2f8d64e5a3e0af569bc26843521ae8c6f1d944e4274c92b2cb9f WatchSource:0}: Error finding container 29ae21a2e1ff2f8d64e5a3e0af569bc26843521ae8c6f1d944e4274c92b2cb9f: Status 404 returned error can't find the container with id 29ae21a2e1ff2f8d64e5a3e0af569bc26843521ae8c6f1d944e4274c92b2cb9f Feb 17 08:58:15 crc kubenswrapper[4813]: I0217 08:58:15.288037 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66c99764bb-74vbt" event={"ID":"0d43d2b0-9e61-4f09-a55f-38a2055fb902","Type":"ContainerStarted","Data":"ec4a592a4f1c958bc7d902f5061a595f3fb7a9beed44fa3a10f5dab09fbe1c5c"} Feb 17 08:58:15 crc kubenswrapper[4813]: I0217 08:58:15.289546 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5" event={"ID":"123ad867-ebe9-4ea6-acfe-82f25010549e","Type":"ContainerStarted","Data":"29ae21a2e1ff2f8d64e5a3e0af569bc26843521ae8c6f1d944e4274c92b2cb9f"} Feb 17 08:58:15 crc kubenswrapper[4813]: I0217 08:58:15.367646 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Feb 17 08:58:16 crc kubenswrapper[4813]: I0217 08:58:16.297282 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"b445db98-91a3-4ca2-9de9-96824d2ba6d2","Type":"ContainerStarted","Data":"572700edead13c3b673f2ad17c46d2f5eefcefe3deb8e916b0e9e5790e906c3f"} Feb 17 08:58:24 crc kubenswrapper[4813]: I0217 08:58:24.381092 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66c99764bb-74vbt" 
event={"ID":"0d43d2b0-9e61-4f09-a55f-38a2055fb902","Type":"ContainerStarted","Data":"5363013d8d4a8656249d0d6abf7033b476ebc7b7fa784ab316d4d4bf8341340f"} Feb 17 08:58:24 crc kubenswrapper[4813]: I0217 08:58:24.401538 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66c99764bb-74vbt" podStartSLOduration=11.401525709 podStartE2EDuration="11.401525709s" podCreationTimestamp="2026-02-17 08:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:58:24.398623376 +0000 UTC m=+1052.059384599" watchObservedRunningTime="2026-02-17 08:58:24.401525709 +0000 UTC m=+1052.062286932" Feb 17 08:58:25 crc kubenswrapper[4813]: I0217 08:58:25.391346 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5" event={"ID":"123ad867-ebe9-4ea6-acfe-82f25010549e","Type":"ContainerStarted","Data":"301dbe3bd9c86e6c402af52f54453f9da2858daccaeab24948a7d187f11c6802"} Feb 17 08:58:25 crc kubenswrapper[4813]: I0217 08:58:25.393774 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"e3a68bb5-cac8-4f13-892a-272304837676","Type":"ContainerStarted","Data":"88a15d470f34d9c7c0aecc3c7a0acb659da95935e430473c6effd65f526ebbd5"} Feb 17 08:58:25 crc kubenswrapper[4813]: I0217 08:58:25.396112 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"a1969f4f-4612-43d8-bd57-6be840c9d815","Type":"ContainerStarted","Data":"34e4bbc6c62eb618eb95c3966aa0a057ce4ca1b60b558d976e59b34e43cd8860"} Feb 17 08:58:25 crc kubenswrapper[4813]: I0217 08:58:25.396289 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 08:58:25 crc kubenswrapper[4813]: I0217 08:58:25.398278 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"afb01180-51b9-47b4-8f48-8ec2ce45c286","Type":"ContainerStarted","Data":"c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db"} Feb 17 08:58:25 crc kubenswrapper[4813]: I0217 08:58:25.398624 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Feb 17 08:58:25 crc kubenswrapper[4813]: I0217 08:58:25.412633 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gh8k5" podStartSLOduration=4.054408897 podStartE2EDuration="13.412612779s" podCreationTimestamp="2026-02-17 08:58:12 +0000 UTC" firstStartedPulling="2026-02-17 08:58:14.901635014 +0000 UTC m=+1042.562396227" lastFinishedPulling="2026-02-17 08:58:24.259838876 +0000 UTC m=+1051.920600109" observedRunningTime="2026-02-17 08:58:25.408227574 +0000 UTC m=+1053.068988797" watchObservedRunningTime="2026-02-17 08:58:25.412612779 +0000 UTC m=+1053.073374002" Feb 17 08:58:25 crc kubenswrapper[4813]: I0217 08:58:25.473563 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.767800851 podStartE2EDuration="14.473542627s" podCreationTimestamp="2026-02-17 08:58:11 +0000 UTC" firstStartedPulling="2026-02-17 08:58:12.554414159 +0000 UTC m=+1040.215175382" lastFinishedPulling="2026-02-17 08:58:24.260155935 +0000 UTC m=+1051.920917158" observedRunningTime="2026-02-17 08:58:25.466130146 +0000 UTC m=+1053.126891379" watchObservedRunningTime="2026-02-17 08:58:25.473542627 +0000 UTC m=+1053.134303850" Feb 17 08:58:25 crc kubenswrapper[4813]: I0217 08:58:25.517628 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=3.1604441149999998 podStartE2EDuration="14.517609245s" podCreationTimestamp="2026-02-17 08:58:11 +0000 UTC" firstStartedPulling="2026-02-17 
08:58:12.866532895 +0000 UTC m=+1040.527294118" lastFinishedPulling="2026-02-17 08:58:24.223698015 +0000 UTC m=+1051.884459248" observedRunningTime="2026-02-17 08:58:25.514763854 +0000 UTC m=+1053.175525077" watchObservedRunningTime="2026-02-17 08:58:25.517609245 +0000 UTC m=+1053.178370468" Feb 17 08:58:26 crc kubenswrapper[4813]: I0217 08:58:26.406470 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"a9a8ae31-c620-49c7-9752-6b045300e16a","Type":"ContainerStarted","Data":"68f0c234de9c1deb2659022713ee51a7bba1a18cf55c0b5894d0d5890791a294"} Feb 17 08:58:26 crc kubenswrapper[4813]: I0217 08:58:26.408378 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"67e1e624-5191-497b-9c9a-ff9d281c0871","Type":"ContainerStarted","Data":"e57574faf928d64cad51fb015b1d908c353ab411f97fae9219175a46a8e2abd4"} Feb 17 08:58:27 crc kubenswrapper[4813]: I0217 08:58:27.416086 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"1fbc11e5-9828-4421-bd30-bbf16df6be8f","Type":"ContainerStarted","Data":"d1dacd9c1148999ac5b20c9372791c5de505aceeb1349a02965c06c4bca0ab0d"} Feb 17 08:58:27 crc kubenswrapper[4813]: I0217 08:58:27.418904 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"b445db98-91a3-4ca2-9de9-96824d2ba6d2","Type":"ContainerStarted","Data":"1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3"} Feb 17 08:58:29 crc kubenswrapper[4813]: I0217 08:58:29.436784 4813 generic.go:334] "Generic (PLEG): container finished" podID="e3a68bb5-cac8-4f13-892a-272304837676" containerID="88a15d470f34d9c7c0aecc3c7a0acb659da95935e430473c6effd65f526ebbd5" exitCode=0 Feb 17 08:58:29 crc kubenswrapper[4813]: I0217 08:58:29.436932 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"e3a68bb5-cac8-4f13-892a-272304837676","Type":"ContainerDied","Data":"88a15d470f34d9c7c0aecc3c7a0acb659da95935e430473c6effd65f526ebbd5"} Feb 17 08:58:30 crc kubenswrapper[4813]: I0217 08:58:30.449934 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"e3a68bb5-cac8-4f13-892a-272304837676","Type":"ContainerStarted","Data":"a80145d43b10f9a215772009cab7e444b71916dd62f7815f65c37d57fa0d7f18"} Feb 17 08:58:30 crc kubenswrapper[4813]: I0217 08:58:30.472435 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstack-galera-0" podStartSLOduration=8.358856361 podStartE2EDuration="20.472408432s" podCreationTimestamp="2026-02-17 08:58:10 +0000 UTC" firstStartedPulling="2026-02-17 08:58:12.13421906 +0000 UTC m=+1039.794980283" lastFinishedPulling="2026-02-17 08:58:24.247771121 +0000 UTC m=+1051.908532354" observedRunningTime="2026-02-17 08:58:30.470104586 +0000 UTC m=+1058.130865799" watchObservedRunningTime="2026-02-17 08:58:30.472408432 +0000 UTC m=+1058.133169685" Feb 17 08:58:31 crc kubenswrapper[4813]: I0217 08:58:31.521802 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:31 crc kubenswrapper[4813]: I0217 08:58:31.521855 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:31 crc kubenswrapper[4813]: I0217 08:58:31.931725 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Feb 17 08:58:32 crc kubenswrapper[4813]: I0217 08:58:32.343561 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 08:58:33 crc kubenswrapper[4813]: I0217 08:58:33.728368 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:33 crc kubenswrapper[4813]: I0217 08:58:33.728415 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:33 crc kubenswrapper[4813]: I0217 08:58:33.735473 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:34 crc kubenswrapper[4813]: I0217 08:58:34.485009 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-66c99764bb-74vbt" Feb 17 08:58:34 crc kubenswrapper[4813]: I0217 08:58:34.563552 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-778567d7df-csf7r"] Feb 17 08:58:35 crc kubenswrapper[4813]: I0217 08:58:35.491271 4813 generic.go:334] "Generic (PLEG): container finished" podID="1fbc11e5-9828-4421-bd30-bbf16df6be8f" containerID="d1dacd9c1148999ac5b20c9372791c5de505aceeb1349a02965c06c4bca0ab0d" exitCode=0 Feb 17 08:58:35 crc kubenswrapper[4813]: I0217 08:58:35.491370 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"1fbc11e5-9828-4421-bd30-bbf16df6be8f","Type":"ContainerDied","Data":"d1dacd9c1148999ac5b20c9372791c5de505aceeb1349a02965c06c4bca0ab0d"} Feb 17 08:58:35 crc kubenswrapper[4813]: I0217 08:58:35.494162 4813 generic.go:334] "Generic (PLEG): container finished" podID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerID="1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3" exitCode=0 Feb 17 08:58:35 crc kubenswrapper[4813]: I0217 08:58:35.494242 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"b445db98-91a3-4ca2-9de9-96824d2ba6d2","Type":"ContainerDied","Data":"1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3"} Feb 17 08:58:35 crc kubenswrapper[4813]: I0217 
08:58:35.681662 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:35 crc kubenswrapper[4813]: I0217 08:58:35.772455 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/openstack-galera-0" Feb 17 08:58:38 crc kubenswrapper[4813]: I0217 08:58:38.522236 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"1fbc11e5-9828-4421-bd30-bbf16df6be8f","Type":"ContainerStarted","Data":"54a04a0044cb7c3630266b252a5b5150a7fdf255d4fe52e3338bf8e60910155a"} Feb 17 08:58:40 crc kubenswrapper[4813]: I0217 08:58:40.278919 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/root-account-create-update-djd62"] Feb 17 08:58:40 crc kubenswrapper[4813]: I0217 08:58:40.280409 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-djd62" Feb 17 08:58:40 crc kubenswrapper[4813]: I0217 08:58:40.282594 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-mariadb-root-db-secret" Feb 17 08:58:40 crc kubenswrapper[4813]: I0217 08:58:40.304425 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-djd62"] Feb 17 08:58:40 crc kubenswrapper[4813]: I0217 08:58:40.305897 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57131d8c-9d5e-492b-9141-43913e805dd1-operator-scripts\") pod \"root-account-create-update-djd62\" (UID: \"57131d8c-9d5e-492b-9141-43913e805dd1\") " pod="watcher-kuttl-default/root-account-create-update-djd62" Feb 17 08:58:40 crc kubenswrapper[4813]: I0217 08:58:40.306014 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cxv7k\" (UniqueName: \"kubernetes.io/projected/57131d8c-9d5e-492b-9141-43913e805dd1-kube-api-access-cxv7k\") pod \"root-account-create-update-djd62\" (UID: \"57131d8c-9d5e-492b-9141-43913e805dd1\") " pod="watcher-kuttl-default/root-account-create-update-djd62" Feb 17 08:58:40 crc kubenswrapper[4813]: I0217 08:58:40.406961 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxv7k\" (UniqueName: \"kubernetes.io/projected/57131d8c-9d5e-492b-9141-43913e805dd1-kube-api-access-cxv7k\") pod \"root-account-create-update-djd62\" (UID: \"57131d8c-9d5e-492b-9141-43913e805dd1\") " pod="watcher-kuttl-default/root-account-create-update-djd62" Feb 17 08:58:40 crc kubenswrapper[4813]: I0217 08:58:40.407084 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57131d8c-9d5e-492b-9141-43913e805dd1-operator-scripts\") pod \"root-account-create-update-djd62\" (UID: \"57131d8c-9d5e-492b-9141-43913e805dd1\") " pod="watcher-kuttl-default/root-account-create-update-djd62" Feb 17 08:58:40 crc kubenswrapper[4813]: I0217 08:58:40.407802 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57131d8c-9d5e-492b-9141-43913e805dd1-operator-scripts\") pod \"root-account-create-update-djd62\" (UID: \"57131d8c-9d5e-492b-9141-43913e805dd1\") " pod="watcher-kuttl-default/root-account-create-update-djd62" Feb 17 08:58:40 crc kubenswrapper[4813]: I0217 08:58:40.479644 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxv7k\" (UniqueName: \"kubernetes.io/projected/57131d8c-9d5e-492b-9141-43913e805dd1-kube-api-access-cxv7k\") pod \"root-account-create-update-djd62\" (UID: \"57131d8c-9d5e-492b-9141-43913e805dd1\") " pod="watcher-kuttl-default/root-account-create-update-djd62" Feb 17 08:58:40 crc kubenswrapper[4813]: I0217 08:58:40.596999 
4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-djd62" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.519629 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-create-std5j"] Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.521131 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-std5j" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.528794 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-std5j"] Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.554906 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"1fbc11e5-9828-4421-bd30-bbf16df6be8f","Type":"ContainerStarted","Data":"376ca9d3472b8043ba07718c62be32cad0f6f37dda74673f4902989d8de1bbb6"} Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.555274 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.558189 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.576404 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/alertmanager-metric-storage-0" podStartSLOduration=5.435703549 podStartE2EDuration="29.576389486s" podCreationTimestamp="2026-02-17 08:58:12 +0000 UTC" firstStartedPulling="2026-02-17 08:58:13.949152036 +0000 UTC m=+1041.609913259" lastFinishedPulling="2026-02-17 08:58:38.089837973 +0000 UTC m=+1065.750599196" observedRunningTime="2026-02-17 08:58:41.572781644 +0000 UTC m=+1069.233542867" watchObservedRunningTime="2026-02-17 08:58:41.576389486 +0000 
UTC m=+1069.237150709" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.625752 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5007a7b4-94ab-4c00-ba51-bf73132dfbfa-operator-scripts\") pod \"keystone-db-create-std5j\" (UID: \"5007a7b4-94ab-4c00-ba51-bf73132dfbfa\") " pod="watcher-kuttl-default/keystone-db-create-std5j" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.625832 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkngz\" (UniqueName: \"kubernetes.io/projected/5007a7b4-94ab-4c00-ba51-bf73132dfbfa-kube-api-access-zkngz\") pod \"keystone-db-create-std5j\" (UID: \"5007a7b4-94ab-4c00-ba51-bf73132dfbfa\") " pod="watcher-kuttl-default/keystone-db-create-std5j" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.727062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5007a7b4-94ab-4c00-ba51-bf73132dfbfa-operator-scripts\") pod \"keystone-db-create-std5j\" (UID: \"5007a7b4-94ab-4c00-ba51-bf73132dfbfa\") " pod="watcher-kuttl-default/keystone-db-create-std5j" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.727110 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-d726-account-create-update-8bbvg"] Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.727488 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkngz\" (UniqueName: \"kubernetes.io/projected/5007a7b4-94ab-4c00-ba51-bf73132dfbfa-kube-api-access-zkngz\") pod \"keystone-db-create-std5j\" (UID: \"5007a7b4-94ab-4c00-ba51-bf73132dfbfa\") " pod="watcher-kuttl-default/keystone-db-create-std5j" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.727759 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5007a7b4-94ab-4c00-ba51-bf73132dfbfa-operator-scripts\") pod \"keystone-db-create-std5j\" (UID: \"5007a7b4-94ab-4c00-ba51-bf73132dfbfa\") " pod="watcher-kuttl-default/keystone-db-create-std5j" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.729486 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.732593 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-db-secret" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.736102 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-d726-account-create-update-8bbvg"] Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.774094 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkngz\" (UniqueName: \"kubernetes.io/projected/5007a7b4-94ab-4c00-ba51-bf73132dfbfa-kube-api-access-zkngz\") pod \"keystone-db-create-std5j\" (UID: \"5007a7b4-94ab-4c00-ba51-bf73132dfbfa\") " pod="watcher-kuttl-default/keystone-db-create-std5j" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.829145 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzw64\" (UniqueName: \"kubernetes.io/projected/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6-kube-api-access-xzw64\") pod \"keystone-d726-account-create-update-8bbvg\" (UID: \"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6\") " pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.829224 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6-operator-scripts\") pod 
\"keystone-d726-account-create-update-8bbvg\" (UID: \"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6\") " pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.843985 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-std5j" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.931191 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzw64\" (UniqueName: \"kubernetes.io/projected/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6-kube-api-access-xzw64\") pod \"keystone-d726-account-create-update-8bbvg\" (UID: \"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6\") " pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.931263 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6-operator-scripts\") pod \"keystone-d726-account-create-update-8bbvg\" (UID: \"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6\") " pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.932133 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6-operator-scripts\") pod \"keystone-d726-account-create-update-8bbvg\" (UID: \"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6\") " pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" Feb 17 08:58:41 crc kubenswrapper[4813]: I0217 08:58:41.947801 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzw64\" (UniqueName: \"kubernetes.io/projected/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6-kube-api-access-xzw64\") pod \"keystone-d726-account-create-update-8bbvg\" (UID: 
\"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6\") " pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" Feb 17 08:58:42 crc kubenswrapper[4813]: I0217 08:58:42.042829 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" Feb 17 08:58:42 crc kubenswrapper[4813]: I0217 08:58:42.542426 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-std5j"] Feb 17 08:58:42 crc kubenswrapper[4813]: I0217 08:58:42.564476 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"b445db98-91a3-4ca2-9de9-96824d2ba6d2","Type":"ContainerStarted","Data":"b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81"} Feb 17 08:58:42 crc kubenswrapper[4813]: I0217 08:58:42.565643 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-std5j" event={"ID":"5007a7b4-94ab-4c00-ba51-bf73132dfbfa","Type":"ContainerStarted","Data":"428e4aa06afa3f349e42747e96f8df0910c6c22d06e647a5876e6d61fca0e63e"} Feb 17 08:58:42 crc kubenswrapper[4813]: I0217 08:58:42.622417 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-djd62"] Feb 17 08:58:42 crc kubenswrapper[4813]: W0217 08:58:42.633454 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57131d8c_9d5e_492b_9141_43913e805dd1.slice/crio-7c2ac36f622f8afb7b670715ea9f5e75bb4d13d2949990e55bcde4b054cbaf6d WatchSource:0}: Error finding container 7c2ac36f622f8afb7b670715ea9f5e75bb4d13d2949990e55bcde4b054cbaf6d: Status 404 returned error can't find the container with id 7c2ac36f622f8afb7b670715ea9f5e75bb4d13d2949990e55bcde4b054cbaf6d Feb 17 08:58:42 crc kubenswrapper[4813]: I0217 08:58:42.707778 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/keystone-d726-account-create-update-8bbvg"] Feb 17 08:58:42 crc kubenswrapper[4813]: W0217 08:58:42.720857 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfdebfb5_6d89_427e_8f3f_2d41f48fa1c6.slice/crio-24ea13d2a1a8e18f306f301e2517bf07b37676b0f97ba4fb661d47ba554b4db5 WatchSource:0}: Error finding container 24ea13d2a1a8e18f306f301e2517bf07b37676b0f97ba4fb661d47ba554b4db5: Status 404 returned error can't find the container with id 24ea13d2a1a8e18f306f301e2517bf07b37676b0f97ba4fb661d47ba554b4db5 Feb 17 08:58:43 crc kubenswrapper[4813]: I0217 08:58:43.575662 4813 generic.go:334] "Generic (PLEG): container finished" podID="57131d8c-9d5e-492b-9141-43913e805dd1" containerID="95689a9723cdb14ac451e4792ed0a05199f19d2734acf96aa3ece8edb9b27d60" exitCode=0 Feb 17 08:58:43 crc kubenswrapper[4813]: I0217 08:58:43.575725 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-djd62" event={"ID":"57131d8c-9d5e-492b-9141-43913e805dd1","Type":"ContainerDied","Data":"95689a9723cdb14ac451e4792ed0a05199f19d2734acf96aa3ece8edb9b27d60"} Feb 17 08:58:43 crc kubenswrapper[4813]: I0217 08:58:43.575803 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-djd62" event={"ID":"57131d8c-9d5e-492b-9141-43913e805dd1","Type":"ContainerStarted","Data":"7c2ac36f622f8afb7b670715ea9f5e75bb4d13d2949990e55bcde4b054cbaf6d"} Feb 17 08:58:43 crc kubenswrapper[4813]: I0217 08:58:43.578577 4813 generic.go:334] "Generic (PLEG): container finished" podID="dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6" containerID="4d8c294646491d315b2686d7b0a80b424d3a771a7e95971da346bb1287c9132c" exitCode=0 Feb 17 08:58:43 crc kubenswrapper[4813]: I0217 08:58:43.578652 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" 
event={"ID":"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6","Type":"ContainerDied","Data":"4d8c294646491d315b2686d7b0a80b424d3a771a7e95971da346bb1287c9132c"} Feb 17 08:58:43 crc kubenswrapper[4813]: I0217 08:58:43.578730 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" event={"ID":"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6","Type":"ContainerStarted","Data":"24ea13d2a1a8e18f306f301e2517bf07b37676b0f97ba4fb661d47ba554b4db5"} Feb 17 08:58:43 crc kubenswrapper[4813]: I0217 08:58:43.580873 4813 generic.go:334] "Generic (PLEG): container finished" podID="5007a7b4-94ab-4c00-ba51-bf73132dfbfa" containerID="330f22faf9a8b981cb08ada4828a3f84bc38b8ff49dbbd7c8779600d60ba0079" exitCode=0 Feb 17 08:58:43 crc kubenswrapper[4813]: I0217 08:58:43.580982 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-std5j" event={"ID":"5007a7b4-94ab-4c00-ba51-bf73132dfbfa","Type":"ContainerDied","Data":"330f22faf9a8b981cb08ada4828a3f84bc38b8ff49dbbd7c8779600d60ba0079"} Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.042251 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-djd62" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.047858 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.055105 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-std5j" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.192827 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzw64\" (UniqueName: \"kubernetes.io/projected/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6-kube-api-access-xzw64\") pod \"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6\" (UID: \"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6\") " Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.192903 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5007a7b4-94ab-4c00-ba51-bf73132dfbfa-operator-scripts\") pod \"5007a7b4-94ab-4c00-ba51-bf73132dfbfa\" (UID: \"5007a7b4-94ab-4c00-ba51-bf73132dfbfa\") " Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.192948 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxv7k\" (UniqueName: \"kubernetes.io/projected/57131d8c-9d5e-492b-9141-43913e805dd1-kube-api-access-cxv7k\") pod \"57131d8c-9d5e-492b-9141-43913e805dd1\" (UID: \"57131d8c-9d5e-492b-9141-43913e805dd1\") " Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.192988 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57131d8c-9d5e-492b-9141-43913e805dd1-operator-scripts\") pod \"57131d8c-9d5e-492b-9141-43913e805dd1\" (UID: \"57131d8c-9d5e-492b-9141-43913e805dd1\") " Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.193017 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkngz\" (UniqueName: \"kubernetes.io/projected/5007a7b4-94ab-4c00-ba51-bf73132dfbfa-kube-api-access-zkngz\") pod \"5007a7b4-94ab-4c00-ba51-bf73132dfbfa\" (UID: \"5007a7b4-94ab-4c00-ba51-bf73132dfbfa\") " Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.193040 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6-operator-scripts\") pod \"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6\" (UID: \"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6\") " Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.193746 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6" (UID: "dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.194079 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57131d8c-9d5e-492b-9141-43913e805dd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57131d8c-9d5e-492b-9141-43913e805dd1" (UID: "57131d8c-9d5e-492b-9141-43913e805dd1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.195207 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5007a7b4-94ab-4c00-ba51-bf73132dfbfa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5007a7b4-94ab-4c00-ba51-bf73132dfbfa" (UID: "5007a7b4-94ab-4c00-ba51-bf73132dfbfa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.198501 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57131d8c-9d5e-492b-9141-43913e805dd1-kube-api-access-cxv7k" (OuterVolumeSpecName: "kube-api-access-cxv7k") pod "57131d8c-9d5e-492b-9141-43913e805dd1" (UID: "57131d8c-9d5e-492b-9141-43913e805dd1"). 
InnerVolumeSpecName "kube-api-access-cxv7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.198979 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5007a7b4-94ab-4c00-ba51-bf73132dfbfa-kube-api-access-zkngz" (OuterVolumeSpecName: "kube-api-access-zkngz") pod "5007a7b4-94ab-4c00-ba51-bf73132dfbfa" (UID: "5007a7b4-94ab-4c00-ba51-bf73132dfbfa"). InnerVolumeSpecName "kube-api-access-zkngz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.206487 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6-kube-api-access-xzw64" (OuterVolumeSpecName: "kube-api-access-xzw64") pod "dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6" (UID: "dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6"). InnerVolumeSpecName "kube-api-access-xzw64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.295859 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzw64\" (UniqueName: \"kubernetes.io/projected/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6-kube-api-access-xzw64\") on node \"crc\" DevicePath \"\"" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.295916 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5007a7b4-94ab-4c00-ba51-bf73132dfbfa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.295937 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxv7k\" (UniqueName: \"kubernetes.io/projected/57131d8c-9d5e-492b-9141-43913e805dd1-kube-api-access-cxv7k\") on node \"crc\" DevicePath \"\"" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.295957 4813 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57131d8c-9d5e-492b-9141-43913e805dd1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.295978 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkngz\" (UniqueName: \"kubernetes.io/projected/5007a7b4-94ab-4c00-ba51-bf73132dfbfa-kube-api-access-zkngz\") on node \"crc\" DevicePath \"\"" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.295998 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.601405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-djd62" event={"ID":"57131d8c-9d5e-492b-9141-43913e805dd1","Type":"ContainerDied","Data":"7c2ac36f622f8afb7b670715ea9f5e75bb4d13d2949990e55bcde4b054cbaf6d"} Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.601466 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c2ac36f622f8afb7b670715ea9f5e75bb4d13d2949990e55bcde4b054cbaf6d" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.601545 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-djd62" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.610603 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" event={"ID":"dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6","Type":"ContainerDied","Data":"24ea13d2a1a8e18f306f301e2517bf07b37676b0f97ba4fb661d47ba554b4db5"} Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.611005 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24ea13d2a1a8e18f306f301e2517bf07b37676b0f97ba4fb661d47ba554b4db5" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.610634 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-d726-account-create-update-8bbvg" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.612827 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-std5j" event={"ID":"5007a7b4-94ab-4c00-ba51-bf73132dfbfa","Type":"ContainerDied","Data":"428e4aa06afa3f349e42747e96f8df0910c6c22d06e647a5876e6d61fca0e63e"} Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.612858 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="428e4aa06afa3f349e42747e96f8df0910c6c22d06e647a5876e6d61fca0e63e" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.612906 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-std5j" Feb 17 08:58:45 crc kubenswrapper[4813]: I0217 08:58:45.619401 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"b445db98-91a3-4ca2-9de9-96824d2ba6d2","Type":"ContainerStarted","Data":"ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10"} Feb 17 08:58:48 crc kubenswrapper[4813]: I0217 08:58:48.660158 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"b445db98-91a3-4ca2-9de9-96824d2ba6d2","Type":"ContainerStarted","Data":"cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa"} Feb 17 08:58:48 crc kubenswrapper[4813]: I0217 08:58:48.705484 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=4.229272466 podStartE2EDuration="36.705452824s" podCreationTimestamp="2026-02-17 08:58:12 +0000 UTC" firstStartedPulling="2026-02-17 08:58:15.57650442 +0000 UTC m=+1043.237265643" lastFinishedPulling="2026-02-17 08:58:48.052684778 +0000 UTC m=+1075.713446001" observedRunningTime="2026-02-17 08:58:48.693369809 +0000 UTC m=+1076.354131052" watchObservedRunningTime="2026-02-17 08:58:48.705452824 +0000 UTC m=+1076.366214087" Feb 17 08:58:48 crc kubenswrapper[4813]: I0217 08:58:48.890209 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:58 crc kubenswrapper[4813]: I0217 08:58:58.766765 4813 generic.go:334] "Generic (PLEG): container finished" podID="a9a8ae31-c620-49c7-9752-6b045300e16a" containerID="68f0c234de9c1deb2659022713ee51a7bba1a18cf55c0b5894d0d5890791a294" exitCode=0 Feb 17 08:58:58 crc kubenswrapper[4813]: I0217 08:58:58.766859 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" 
event={"ID":"a9a8ae31-c620-49c7-9752-6b045300e16a","Type":"ContainerDied","Data":"68f0c234de9c1deb2659022713ee51a7bba1a18cf55c0b5894d0d5890791a294"} Feb 17 08:58:58 crc kubenswrapper[4813]: I0217 08:58:58.770658 4813 generic.go:334] "Generic (PLEG): container finished" podID="67e1e624-5191-497b-9c9a-ff9d281c0871" containerID="e57574faf928d64cad51fb015b1d908c353ab411f97fae9219175a46a8e2abd4" exitCode=0 Feb 17 08:58:58 crc kubenswrapper[4813]: I0217 08:58:58.770739 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"67e1e624-5191-497b-9c9a-ff9d281c0871","Type":"ContainerDied","Data":"e57574faf928d64cad51fb015b1d908c353ab411f97fae9219175a46a8e2abd4"} Feb 17 08:58:58 crc kubenswrapper[4813]: I0217 08:58:58.890649 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:58 crc kubenswrapper[4813]: I0217 08:58:58.893150 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:59 crc kubenswrapper[4813]: I0217 08:58:59.631998 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-778567d7df-csf7r" podUID="711327fc-bdf0-4251-a90d-968ec048caa5" containerName="console" containerID="cri-o://a092ff3bb949aec18104bcad58bd73bc06e5e6a0c4cd915b5399af53f74151f5" gracePeriod=15 Feb 17 08:58:59 crc kubenswrapper[4813]: I0217 08:58:59.784132 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"67e1e624-5191-497b-9c9a-ff9d281c0871","Type":"ContainerStarted","Data":"89ef582f4191bfd2443b0ca41d4663a2b4e707d1fee4ecd24108798e1402a210"} Feb 17 08:58:59 crc kubenswrapper[4813]: I0217 08:58:59.784371 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:58:59 crc kubenswrapper[4813]: 
I0217 08:58:59.785873 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"a9a8ae31-c620-49c7-9752-6b045300e16a","Type":"ContainerStarted","Data":"d637dfefa7e9b186356d620a52fa572aadca503c935396f87e6081d844d2433c"} Feb 17 08:58:59 crc kubenswrapper[4813]: I0217 08:58:59.786079 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 17 08:58:59 crc kubenswrapper[4813]: I0217 08:58:59.787537 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-778567d7df-csf7r_711327fc-bdf0-4251-a90d-968ec048caa5/console/0.log" Feb 17 08:58:59 crc kubenswrapper[4813]: I0217 08:58:59.787579 4813 generic.go:334] "Generic (PLEG): container finished" podID="711327fc-bdf0-4251-a90d-968ec048caa5" containerID="a092ff3bb949aec18104bcad58bd73bc06e5e6a0c4cd915b5399af53f74151f5" exitCode=2 Feb 17 08:58:59 crc kubenswrapper[4813]: I0217 08:58:59.787665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-778567d7df-csf7r" event={"ID":"711327fc-bdf0-4251-a90d-968ec048caa5","Type":"ContainerDied","Data":"a092ff3bb949aec18104bcad58bd73bc06e5e6a0c4cd915b5399af53f74151f5"} Feb 17 08:58:59 crc kubenswrapper[4813]: I0217 08:58:59.798431 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:58:59 crc kubenswrapper[4813]: I0217 08:58:59.814192 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-server-0" podStartSLOduration=37.915161368 podStartE2EDuration="51.814176824s" podCreationTimestamp="2026-02-17 08:58:08 +0000 UTC" firstStartedPulling="2026-02-17 08:58:10.407733498 +0000 UTC m=+1038.068494721" lastFinishedPulling="2026-02-17 08:58:24.306748944 +0000 UTC m=+1051.967510177" observedRunningTime="2026-02-17 08:58:59.811403265 +0000 UTC 
m=+1087.472164488" watchObservedRunningTime="2026-02-17 08:58:59.814176824 +0000 UTC m=+1087.474938047" Feb 17 08:58:59 crc kubenswrapper[4813]: I0217 08:58:59.850198 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=37.53498742 podStartE2EDuration="51.850183772s" podCreationTimestamp="2026-02-17 08:58:08 +0000 UTC" firstStartedPulling="2026-02-17 08:58:09.99146946 +0000 UTC m=+1037.652230683" lastFinishedPulling="2026-02-17 08:58:24.306665802 +0000 UTC m=+1051.967427035" observedRunningTime="2026-02-17 08:58:59.844703855 +0000 UTC m=+1087.505465088" watchObservedRunningTime="2026-02-17 08:58:59.850183772 +0000 UTC m=+1087.510944995" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.151762 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-778567d7df-csf7r_711327fc-bdf0-4251-a90d-968ec048caa5/console/0.log" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.152158 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.276699 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/711327fc-bdf0-4251-a90d-968ec048caa5-console-serving-cert\") pod \"711327fc-bdf0-4251-a90d-968ec048caa5\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.276791 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-service-ca\") pod \"711327fc-bdf0-4251-a90d-968ec048caa5\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.276836 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/711327fc-bdf0-4251-a90d-968ec048caa5-console-oauth-config\") pod \"711327fc-bdf0-4251-a90d-968ec048caa5\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.276866 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-trusted-ca-bundle\") pod \"711327fc-bdf0-4251-a90d-968ec048caa5\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.276922 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-console-config\") pod \"711327fc-bdf0-4251-a90d-968ec048caa5\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.277017 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-rr4j7\" (UniqueName: \"kubernetes.io/projected/711327fc-bdf0-4251-a90d-968ec048caa5-kube-api-access-rr4j7\") pod \"711327fc-bdf0-4251-a90d-968ec048caa5\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.277059 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-oauth-serving-cert\") pod \"711327fc-bdf0-4251-a90d-968ec048caa5\" (UID: \"711327fc-bdf0-4251-a90d-968ec048caa5\") " Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.277617 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-service-ca" (OuterVolumeSpecName: "service-ca") pod "711327fc-bdf0-4251-a90d-968ec048caa5" (UID: "711327fc-bdf0-4251-a90d-968ec048caa5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.277781 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "711327fc-bdf0-4251-a90d-968ec048caa5" (UID: "711327fc-bdf0-4251-a90d-968ec048caa5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.278030 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-console-config" (OuterVolumeSpecName: "console-config") pod "711327fc-bdf0-4251-a90d-968ec048caa5" (UID: "711327fc-bdf0-4251-a90d-968ec048caa5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.284194 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711327fc-bdf0-4251-a90d-968ec048caa5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "711327fc-bdf0-4251-a90d-968ec048caa5" (UID: "711327fc-bdf0-4251-a90d-968ec048caa5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.288419 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "711327fc-bdf0-4251-a90d-968ec048caa5" (UID: "711327fc-bdf0-4251-a90d-968ec048caa5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.291227 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/711327fc-bdf0-4251-a90d-968ec048caa5-kube-api-access-rr4j7" (OuterVolumeSpecName: "kube-api-access-rr4j7") pod "711327fc-bdf0-4251-a90d-968ec048caa5" (UID: "711327fc-bdf0-4251-a90d-968ec048caa5"). InnerVolumeSpecName "kube-api-access-rr4j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.293392 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711327fc-bdf0-4251-a90d-968ec048caa5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "711327fc-bdf0-4251-a90d-968ec048caa5" (UID: "711327fc-bdf0-4251-a90d-968ec048caa5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.378723 4813 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.378762 4813 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/711327fc-bdf0-4251-a90d-968ec048caa5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.378777 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.378789 4813 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/711327fc-bdf0-4251-a90d-968ec048caa5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.378802 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.378815 4813 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/711327fc-bdf0-4251-a90d-968ec048caa5-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.378826 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr4j7\" (UniqueName: \"kubernetes.io/projected/711327fc-bdf0-4251-a90d-968ec048caa5-kube-api-access-rr4j7\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:00 crc 
kubenswrapper[4813]: I0217 08:59:00.794703 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-778567d7df-csf7r_711327fc-bdf0-4251-a90d-968ec048caa5/console/0.log" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.794793 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-778567d7df-csf7r" event={"ID":"711327fc-bdf0-4251-a90d-968ec048caa5","Type":"ContainerDied","Data":"24bfe783bfb7fbcda8909429d0ef36a425e5422091955b070af7dc4211a01ab0"} Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.794968 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-778567d7df-csf7r" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.795014 4813 scope.go:117] "RemoveContainer" containerID="a092ff3bb949aec18104bcad58bd73bc06e5e6a0c4cd915b5399af53f74151f5" Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.828055 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-778567d7df-csf7r"] Feb 17 08:59:00 crc kubenswrapper[4813]: I0217 08:59:00.840132 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-778567d7df-csf7r"] Feb 17 08:59:01 crc kubenswrapper[4813]: I0217 08:59:01.123963 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="711327fc-bdf0-4251-a90d-968ec048caa5" path="/var/lib/kubelet/pods/711327fc-bdf0-4251-a90d-968ec048caa5/volumes" Feb 17 08:59:02 crc kubenswrapper[4813]: I0217 08:59:02.834715 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Feb 17 08:59:02 crc kubenswrapper[4813]: I0217 08:59:02.835252 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="prometheus" 
containerID="cri-o://b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81" gracePeriod=600 Feb 17 08:59:02 crc kubenswrapper[4813]: I0217 08:59:02.835370 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="config-reloader" containerID="cri-o://ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10" gracePeriod=600 Feb 17 08:59:02 crc kubenswrapper[4813]: I0217 08:59:02.835372 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="thanos-sidecar" containerID="cri-o://cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa" gracePeriod=600 Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.314496 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.429386 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\") pod \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.429449 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-1\") pod \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.429504 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-thanos-prometheus-http-client-file\") pod \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.429557 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hf5g\" (UniqueName: \"kubernetes.io/projected/b445db98-91a3-4ca2-9de9-96824d2ba6d2-kube-api-access-9hf5g\") pod \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.429586 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-2\") pod \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.429606 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b445db98-91a3-4ca2-9de9-96824d2ba6d2-config-out\") pod \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.429630 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-web-config\") pod \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.429656 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b445db98-91a3-4ca2-9de9-96824d2ba6d2-tls-assets\") pod \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\" 
(UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.429726 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-0\") pod \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.429752 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-config\") pod \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\" (UID: \"b445db98-91a3-4ca2-9de9-96824d2ba6d2\") " Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.429918 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "b445db98-91a3-4ca2-9de9-96824d2ba6d2" (UID: "b445db98-91a3-4ca2-9de9-96824d2ba6d2"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.430264 4813 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.430745 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "b445db98-91a3-4ca2-9de9-96824d2ba6d2" (UID: "b445db98-91a3-4ca2-9de9-96824d2ba6d2"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.431016 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "b445db98-91a3-4ca2-9de9-96824d2ba6d2" (UID: "b445db98-91a3-4ca2-9de9-96824d2ba6d2"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.437409 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b445db98-91a3-4ca2-9de9-96824d2ba6d2-config-out" (OuterVolumeSpecName: "config-out") pod "b445db98-91a3-4ca2-9de9-96824d2ba6d2" (UID: "b445db98-91a3-4ca2-9de9-96824d2ba6d2"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.441488 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-config" (OuterVolumeSpecName: "config") pod "b445db98-91a3-4ca2-9de9-96824d2ba6d2" (UID: "b445db98-91a3-4ca2-9de9-96824d2ba6d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.441565 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "b445db98-91a3-4ca2-9de9-96824d2ba6d2" (UID: "b445db98-91a3-4ca2-9de9-96824d2ba6d2"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.441662 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b445db98-91a3-4ca2-9de9-96824d2ba6d2-kube-api-access-9hf5g" (OuterVolumeSpecName: "kube-api-access-9hf5g") pod "b445db98-91a3-4ca2-9de9-96824d2ba6d2" (UID: "b445db98-91a3-4ca2-9de9-96824d2ba6d2"). InnerVolumeSpecName "kube-api-access-9hf5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.441715 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b445db98-91a3-4ca2-9de9-96824d2ba6d2-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "b445db98-91a3-4ca2-9de9-96824d2ba6d2" (UID: "b445db98-91a3-4ca2-9de9-96824d2ba6d2"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.459744 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "b445db98-91a3-4ca2-9de9-96824d2ba6d2" (UID: "b445db98-91a3-4ca2-9de9-96824d2ba6d2"). InnerVolumeSpecName "pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.468873 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-web-config" (OuterVolumeSpecName: "web-config") pod "b445db98-91a3-4ca2-9de9-96824d2ba6d2" (UID: "b445db98-91a3-4ca2-9de9-96824d2ba6d2"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.531791 4813 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.531828 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hf5g\" (UniqueName: \"kubernetes.io/projected/b445db98-91a3-4ca2-9de9-96824d2ba6d2-kube-api-access-9hf5g\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.531839 4813 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.531850 4813 reconciler_common.go:293] "Volume detached for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b445db98-91a3-4ca2-9de9-96824d2ba6d2-config-out\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.531859 4813 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-web-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.531868 4813 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b445db98-91a3-4ca2-9de9-96824d2ba6d2-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.531877 4813 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b445db98-91a3-4ca2-9de9-96824d2ba6d2-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.531888 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b445db98-91a3-4ca2-9de9-96824d2ba6d2-config\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.531925 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\") on node \"crc\" " Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.548627 4813 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.548770 4813 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379") on node "crc" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.632982 4813 reconciler_common.go:293] "Volume detached for volume \"pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.820476 4813 generic.go:334] "Generic (PLEG): container finished" podID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerID="cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa" exitCode=0 Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.820516 4813 generic.go:334] "Generic (PLEG): container finished" podID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerID="ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10" exitCode=0 Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.820530 4813 generic.go:334] "Generic (PLEG): container finished" podID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerID="b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81" exitCode=0 Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.820542 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"b445db98-91a3-4ca2-9de9-96824d2ba6d2","Type":"ContainerDied","Data":"cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa"} Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.820592 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.820608 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"b445db98-91a3-4ca2-9de9-96824d2ba6d2","Type":"ContainerDied","Data":"ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10"} Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.820632 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"b445db98-91a3-4ca2-9de9-96824d2ba6d2","Type":"ContainerDied","Data":"b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81"} Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.820647 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"b445db98-91a3-4ca2-9de9-96824d2ba6d2","Type":"ContainerDied","Data":"572700edead13c3b673f2ad17c46d2f5eefcefe3deb8e916b0e9e5790e906c3f"} Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.820673 4813 scope.go:117] "RemoveContainer" containerID="cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.840205 4813 scope.go:117] "RemoveContainer" containerID="ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.858530 4813 scope.go:117] "RemoveContainer" containerID="b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.878290 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.880302 4813 scope.go:117] "RemoveContainer" containerID="1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 
08:59:03.890257 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.901570 4813 scope.go:117] "RemoveContainer" containerID="cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa" Feb 17 08:59:03 crc kubenswrapper[4813]: E0217 08:59:03.902122 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa\": container with ID starting with cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa not found: ID does not exist" containerID="cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.902187 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa"} err="failed to get container status \"cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa\": rpc error: code = NotFound desc = could not find container \"cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa\": container with ID starting with cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa not found: ID does not exist" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.902224 4813 scope.go:117] "RemoveContainer" containerID="ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10" Feb 17 08:59:03 crc kubenswrapper[4813]: E0217 08:59:03.902619 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10\": container with ID starting with ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10 not found: ID does not exist" 
containerID="ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.902673 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10"} err="failed to get container status \"ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10\": rpc error: code = NotFound desc = could not find container \"ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10\": container with ID starting with ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10 not found: ID does not exist" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.902720 4813 scope.go:117] "RemoveContainer" containerID="b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81" Feb 17 08:59:03 crc kubenswrapper[4813]: E0217 08:59:03.902985 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81\": container with ID starting with b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81 not found: ID does not exist" containerID="b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.903021 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81"} err="failed to get container status \"b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81\": rpc error: code = NotFound desc = could not find container \"b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81\": container with ID starting with b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81 not found: ID does not exist" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.903040 4813 scope.go:117] 
"RemoveContainer" containerID="1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3" Feb 17 08:59:03 crc kubenswrapper[4813]: E0217 08:59:03.903263 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3\": container with ID starting with 1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3 not found: ID does not exist" containerID="1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.903292 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3"} err="failed to get container status \"1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3\": rpc error: code = NotFound desc = could not find container \"1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3\": container with ID starting with 1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3 not found: ID does not exist" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.903330 4813 scope.go:117] "RemoveContainer" containerID="cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.903557 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa"} err="failed to get container status \"cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa\": rpc error: code = NotFound desc = could not find container \"cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa\": container with ID starting with cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa not found: ID does not exist" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.903585 4813 
scope.go:117] "RemoveContainer" containerID="ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.903813 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10"} err="failed to get container status \"ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10\": rpc error: code = NotFound desc = could not find container \"ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10\": container with ID starting with ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10 not found: ID does not exist" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.903838 4813 scope.go:117] "RemoveContainer" containerID="b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.904048 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81"} err="failed to get container status \"b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81\": rpc error: code = NotFound desc = could not find container \"b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81\": container with ID starting with b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81 not found: ID does not exist" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.904077 4813 scope.go:117] "RemoveContainer" containerID="1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.904285 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3"} err="failed to get container status \"1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3\": rpc 
error: code = NotFound desc = could not find container \"1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3\": container with ID starting with 1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3 not found: ID does not exist" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.904325 4813 scope.go:117] "RemoveContainer" containerID="cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.904555 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa"} err="failed to get container status \"cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa\": rpc error: code = NotFound desc = could not find container \"cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa\": container with ID starting with cd91a5646580fcd10ee510838cf1b81555d2455818083a1dfe8fd8915c00adaa not found: ID does not exist" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.904584 4813 scope.go:117] "RemoveContainer" containerID="ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.904812 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10"} err="failed to get container status \"ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10\": rpc error: code = NotFound desc = could not find container \"ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10\": container with ID starting with ad3c3818dfa58aeee4a3ae043457c4e5fbe2da4adc62c69381c5097f9c2a7e10 not found: ID does not exist" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.904842 4813 scope.go:117] "RemoveContainer" containerID="b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81" Feb 17 08:59:03 crc 
kubenswrapper[4813]: I0217 08:59:03.905069 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81"} err="failed to get container status \"b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81\": rpc error: code = NotFound desc = could not find container \"b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81\": container with ID starting with b34abd2cd61a71b90d76a9eb5d9ab5bd0ab41325003ba4948c2cf1afebd40d81 not found: ID does not exist" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.905098 4813 scope.go:117] "RemoveContainer" containerID="1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.905318 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3"} err="failed to get container status \"1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3\": rpc error: code = NotFound desc = could not find container \"1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3\": container with ID starting with 1c4ef44ea586d2f47e4c700047ac75ff890704c51524873d6e2d3ece95cd82a3 not found: ID does not exist" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.915188 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Feb 17 08:59:03 crc kubenswrapper[4813]: E0217 08:59:03.915586 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="thanos-sidecar" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.915608 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="thanos-sidecar" Feb 17 08:59:03 crc kubenswrapper[4813]: E0217 08:59:03.915666 4813 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="init-config-reloader" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.915676 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="init-config-reloader" Feb 17 08:59:03 crc kubenswrapper[4813]: E0217 08:59:03.915701 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="config-reloader" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.915709 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="config-reloader" Feb 17 08:59:03 crc kubenswrapper[4813]: E0217 08:59:03.915730 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5007a7b4-94ab-4c00-ba51-bf73132dfbfa" containerName="mariadb-database-create" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.915738 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5007a7b4-94ab-4c00-ba51-bf73132dfbfa" containerName="mariadb-database-create" Feb 17 08:59:03 crc kubenswrapper[4813]: E0217 08:59:03.915748 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6" containerName="mariadb-account-create-update" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.915756 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6" containerName="mariadb-account-create-update" Feb 17 08:59:03 crc kubenswrapper[4813]: E0217 08:59:03.915769 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57131d8c-9d5e-492b-9141-43913e805dd1" containerName="mariadb-account-create-update" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.915778 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="57131d8c-9d5e-492b-9141-43913e805dd1" containerName="mariadb-account-create-update" Feb 17 08:59:03 crc 
kubenswrapper[4813]: E0217 08:59:03.915795 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711327fc-bdf0-4251-a90d-968ec048caa5" containerName="console" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.915804 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="711327fc-bdf0-4251-a90d-968ec048caa5" containerName="console" Feb 17 08:59:03 crc kubenswrapper[4813]: E0217 08:59:03.915822 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="prometheus" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.915830 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="prometheus" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.916016 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="thanos-sidecar" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.916032 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="prometheus" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.916041 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="57131d8c-9d5e-492b-9141-43913e805dd1" containerName="mariadb-account-create-update" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.916053 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6" containerName="mariadb-account-create-update" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.916066 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="711327fc-bdf0-4251-a90d-968ec048caa5" containerName="console" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.916074 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" containerName="config-reloader" Feb 17 08:59:03 crc 
kubenswrapper[4813]: I0217 08:59:03.916086 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5007a7b4-94ab-4c00-ba51-bf73132dfbfa" containerName="mariadb-database-create" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.919075 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.920850 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.921387 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.921923 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.921975 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-metric-storage-prometheus-svc" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.922646 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.922968 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-zrcnr" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.923233 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.923934 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Feb 17 08:59:03 crc kubenswrapper[4813]: 
I0217 08:59:03.933034 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Feb 17 08:59:03 crc kubenswrapper[4813]: I0217 08:59:03.951643 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.038966 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.039351 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.039389 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.039527 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.039675 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-config\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.039725 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.039857 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.039931 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc 
kubenswrapper[4813]: I0217 08:59:04.039987 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.040038 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.040077 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq6gn\" (UniqueName: \"kubernetes.io/projected/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-kube-api-access-gq6gn\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.040122 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.040214 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142147 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142204 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142235 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142263 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142333 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-config\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142363 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142385 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142410 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142433 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142470 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142640 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq6gn\" (UniqueName: \"kubernetes.io/projected/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-kube-api-access-gq6gn\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142675 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.142751 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: 
I0217 08:59:04.143462 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.143477 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.143848 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.147572 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.147610 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1e200dfcdbaa5702b94bbdc90ab350504c0d12de0768779fa6f433eb34cbbe0c/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.149277 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.149712 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.151767 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.153240 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.155840 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.157452 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.157594 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-config\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.158145 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc 
kubenswrapper[4813]: I0217 08:59:04.170769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq6gn\" (UniqueName: \"kubernetes.io/projected/1335e70a-64d4-40c2-9c87-ec4dfdbddfcf-kube-api-access-gq6gn\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.182673 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e6586eb-25fd-45e0-99ca-2f2aae016379\") pod \"prometheus-metric-storage-0\" (UID: \"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.248862 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.765193 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Feb 17 08:59:04 crc kubenswrapper[4813]: W0217 08:59:04.775268 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1335e70a_64d4_40c2_9c87_ec4dfdbddfcf.slice/crio-a85b732d7865e9b4f6833c3ccad064c2e609032b7396b36f8813a8891f1b0792 WatchSource:0}: Error finding container a85b732d7865e9b4f6833c3ccad064c2e609032b7396b36f8813a8891f1b0792: Status 404 returned error can't find the container with id a85b732d7865e9b4f6833c3ccad064c2e609032b7396b36f8813a8891f1b0792 Feb 17 08:59:04 crc kubenswrapper[4813]: I0217 08:59:04.830449 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" 
event={"ID":"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf","Type":"ContainerStarted","Data":"a85b732d7865e9b4f6833c3ccad064c2e609032b7396b36f8813a8891f1b0792"} Feb 17 08:59:05 crc kubenswrapper[4813]: I0217 08:59:05.129149 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b445db98-91a3-4ca2-9de9-96824d2ba6d2" path="/var/lib/kubelet/pods/b445db98-91a3-4ca2-9de9-96824d2ba6d2/volumes" Feb 17 08:59:05 crc kubenswrapper[4813]: I0217 08:59:05.165169 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:59:05 crc kubenswrapper[4813]: I0217 08:59:05.165234 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:59:08 crc kubenswrapper[4813]: I0217 08:59:08.873013 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf","Type":"ContainerStarted","Data":"a2676902f3b31b8cfbb268952f2c03b4c74e10626367115c496314abf9bd0685"} Feb 17 08:59:09 crc kubenswrapper[4813]: I0217 08:59:09.479596 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Feb 17 08:59:09 crc kubenswrapper[4813]: I0217 08:59:09.930607 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-server-0" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.596348 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["watcher-kuttl-default/keystone-db-sync-vpgcw"] Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.598067 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.606860 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.608681 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-86q5v" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.609278 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.609293 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.617457 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-vpgcw"] Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.677256 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0f4474-1437-4124-8bbd-132f173fda58-config-data\") pod \"keystone-db-sync-vpgcw\" (UID: \"4a0f4474-1437-4124-8bbd-132f173fda58\") " pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.677339 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0f4474-1437-4124-8bbd-132f173fda58-combined-ca-bundle\") pod \"keystone-db-sync-vpgcw\" (UID: \"4a0f4474-1437-4124-8bbd-132f173fda58\") " pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.677363 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncpmm\" (UniqueName: \"kubernetes.io/projected/4a0f4474-1437-4124-8bbd-132f173fda58-kube-api-access-ncpmm\") pod \"keystone-db-sync-vpgcw\" (UID: \"4a0f4474-1437-4124-8bbd-132f173fda58\") " pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.778762 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0f4474-1437-4124-8bbd-132f173fda58-config-data\") pod \"keystone-db-sync-vpgcw\" (UID: \"4a0f4474-1437-4124-8bbd-132f173fda58\") " pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.778847 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0f4474-1437-4124-8bbd-132f173fda58-combined-ca-bundle\") pod \"keystone-db-sync-vpgcw\" (UID: \"4a0f4474-1437-4124-8bbd-132f173fda58\") " pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.778874 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncpmm\" (UniqueName: \"kubernetes.io/projected/4a0f4474-1437-4124-8bbd-132f173fda58-kube-api-access-ncpmm\") pod \"keystone-db-sync-vpgcw\" (UID: \"4a0f4474-1437-4124-8bbd-132f173fda58\") " pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.787508 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0f4474-1437-4124-8bbd-132f173fda58-config-data\") pod \"keystone-db-sync-vpgcw\" (UID: \"4a0f4474-1437-4124-8bbd-132f173fda58\") " pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.791928 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0f4474-1437-4124-8bbd-132f173fda58-combined-ca-bundle\") pod \"keystone-db-sync-vpgcw\" (UID: \"4a0f4474-1437-4124-8bbd-132f173fda58\") " pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.795855 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncpmm\" (UniqueName: \"kubernetes.io/projected/4a0f4474-1437-4124-8bbd-132f173fda58-kube-api-access-ncpmm\") pod \"keystone-db-sync-vpgcw\" (UID: \"4a0f4474-1437-4124-8bbd-132f173fda58\") " pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:11 crc kubenswrapper[4813]: I0217 08:59:11.916837 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:12 crc kubenswrapper[4813]: I0217 08:59:12.380904 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-vpgcw"] Feb 17 08:59:12 crc kubenswrapper[4813]: I0217 08:59:12.913190 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-vpgcw" event={"ID":"4a0f4474-1437-4124-8bbd-132f173fda58","Type":"ContainerStarted","Data":"55797f9a5f8fedaeba12bfd2a64969920e767285fb1261606c4dda0c46ec9b1a"} Feb 17 08:59:15 crc kubenswrapper[4813]: I0217 08:59:15.941437 4813 generic.go:334] "Generic (PLEG): container finished" podID="1335e70a-64d4-40c2-9c87-ec4dfdbddfcf" containerID="a2676902f3b31b8cfbb268952f2c03b4c74e10626367115c496314abf9bd0685" exitCode=0 Feb 17 08:59:15 crc kubenswrapper[4813]: I0217 08:59:15.941510 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf","Type":"ContainerDied","Data":"a2676902f3b31b8cfbb268952f2c03b4c74e10626367115c496314abf9bd0685"} Feb 17 08:59:20 crc kubenswrapper[4813]: I0217 
08:59:20.992378 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf","Type":"ContainerStarted","Data":"aaba052dbd504e588441b9c38b1c950d7071ec74946f29dacb96210bce4759e8"} Feb 17 08:59:20 crc kubenswrapper[4813]: I0217 08:59:20.995042 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-vpgcw" event={"ID":"4a0f4474-1437-4124-8bbd-132f173fda58","Type":"ContainerStarted","Data":"c488a8227b16bf2d099927a19350c4626650b68d38197779b703abe7af59394f"} Feb 17 08:59:21 crc kubenswrapper[4813]: I0217 08:59:21.028678 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-db-sync-vpgcw" podStartSLOduration=2.378613035 podStartE2EDuration="10.028653957s" podCreationTimestamp="2026-02-17 08:59:11 +0000 UTC" firstStartedPulling="2026-02-17 08:59:12.387496805 +0000 UTC m=+1100.048258028" lastFinishedPulling="2026-02-17 08:59:20.037537717 +0000 UTC m=+1107.698298950" observedRunningTime="2026-02-17 08:59:21.020733321 +0000 UTC m=+1108.681494614" watchObservedRunningTime="2026-02-17 08:59:21.028653957 +0000 UTC m=+1108.689415190" Feb 17 08:59:23 crc kubenswrapper[4813]: I0217 08:59:23.013623 4813 generic.go:334] "Generic (PLEG): container finished" podID="4a0f4474-1437-4124-8bbd-132f173fda58" containerID="c488a8227b16bf2d099927a19350c4626650b68d38197779b703abe7af59394f" exitCode=0 Feb 17 08:59:23 crc kubenswrapper[4813]: I0217 08:59:23.013997 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-vpgcw" event={"ID":"4a0f4474-1437-4124-8bbd-132f173fda58","Type":"ContainerDied","Data":"c488a8227b16bf2d099927a19350c4626650b68d38197779b703abe7af59394f"} Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.026302 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" 
event={"ID":"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf","Type":"ContainerStarted","Data":"23caf4bb4050abe5f46f9f6cf3273e75cf3d1a327e66eb630b5372e1f82faf84"} Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.026798 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"1335e70a-64d4-40c2-9c87-ec4dfdbddfcf","Type":"ContainerStarted","Data":"cd172154e5b865dfcf6ec6790261e587362646bc40631c36a4f7fdec410de450"} Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.072042 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=21.072022465 podStartE2EDuration="21.072022465s" podCreationTimestamp="2026-02-17 08:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:59:24.066432786 +0000 UTC m=+1111.727194079" watchObservedRunningTime="2026-02-17 08:59:24.072022465 +0000 UTC m=+1111.732783698" Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.258745 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.403068 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.591614 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0f4474-1437-4124-8bbd-132f173fda58-config-data\") pod \"4a0f4474-1437-4124-8bbd-132f173fda58\" (UID: \"4a0f4474-1437-4124-8bbd-132f173fda58\") " Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.591767 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0f4474-1437-4124-8bbd-132f173fda58-combined-ca-bundle\") pod \"4a0f4474-1437-4124-8bbd-132f173fda58\" (UID: \"4a0f4474-1437-4124-8bbd-132f173fda58\") " Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.591866 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncpmm\" (UniqueName: \"kubernetes.io/projected/4a0f4474-1437-4124-8bbd-132f173fda58-kube-api-access-ncpmm\") pod \"4a0f4474-1437-4124-8bbd-132f173fda58\" (UID: \"4a0f4474-1437-4124-8bbd-132f173fda58\") " Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.600583 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a0f4474-1437-4124-8bbd-132f173fda58-kube-api-access-ncpmm" (OuterVolumeSpecName: "kube-api-access-ncpmm") pod "4a0f4474-1437-4124-8bbd-132f173fda58" (UID: "4a0f4474-1437-4124-8bbd-132f173fda58"). InnerVolumeSpecName "kube-api-access-ncpmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.620014 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a0f4474-1437-4124-8bbd-132f173fda58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a0f4474-1437-4124-8bbd-132f173fda58" (UID: "4a0f4474-1437-4124-8bbd-132f173fda58"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.666578 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a0f4474-1437-4124-8bbd-132f173fda58-config-data" (OuterVolumeSpecName: "config-data") pod "4a0f4474-1437-4124-8bbd-132f173fda58" (UID: "4a0f4474-1437-4124-8bbd-132f173fda58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.693809 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncpmm\" (UniqueName: \"kubernetes.io/projected/4a0f4474-1437-4124-8bbd-132f173fda58-kube-api-access-ncpmm\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.693847 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0f4474-1437-4124-8bbd-132f173fda58-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:24 crc kubenswrapper[4813]: I0217 08:59:24.693862 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0f4474-1437-4124-8bbd-132f173fda58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.036811 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-vpgcw" event={"ID":"4a0f4474-1437-4124-8bbd-132f173fda58","Type":"ContainerDied","Data":"55797f9a5f8fedaeba12bfd2a64969920e767285fb1261606c4dda0c46ec9b1a"} Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.036840 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-vpgcw" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.036871 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55797f9a5f8fedaeba12bfd2a64969920e767285fb1261606c4dda0c46ec9b1a" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.264522 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-vv4fr"] Feb 17 08:59:25 crc kubenswrapper[4813]: E0217 08:59:25.265576 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0f4474-1437-4124-8bbd-132f173fda58" containerName="keystone-db-sync" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.265609 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0f4474-1437-4124-8bbd-132f173fda58" containerName="keystone-db-sync" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.266008 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a0f4474-1437-4124-8bbd-132f173fda58" containerName="keystone-db-sync" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.266702 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.271593 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.271792 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.271896 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.272012 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.272382 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-86q5v" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.291207 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-vv4fr"] Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.404630 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fttdv\" (UniqueName: \"kubernetes.io/projected/5fc77ad3-104d-4a49-8317-c8a1d9204139-kube-api-access-fttdv\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.404784 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-credential-keys\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: 
I0217 08:59:25.404864 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-scripts\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.404882 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-config-data\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.405025 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-combined-ca-bundle\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.405094 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-fernet-keys\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.448483 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.450667 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.454547 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.455032 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.468493 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.506331 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-credential-keys\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.506403 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-scripts\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.506425 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-config-data\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.506467 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-combined-ca-bundle\") pod 
\"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.506490 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-fernet-keys\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.506574 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fttdv\" (UniqueName: \"kubernetes.io/projected/5fc77ad3-104d-4a49-8317-c8a1d9204139-kube-api-access-fttdv\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.514781 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-credential-keys\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.514877 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-fernet-keys\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.516666 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-combined-ca-bundle\") pod \"keystone-bootstrap-vv4fr\" (UID: 
\"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.516799 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-config-data\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.518243 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-scripts\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.527765 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fttdv\" (UniqueName: \"kubernetes.io/projected/5fc77ad3-104d-4a49-8317-c8a1d9204139-kube-api-access-fttdv\") pod \"keystone-bootstrap-vv4fr\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.597601 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.608114 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-config-data\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.608159 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5516ae6d-f229-4b34-8510-6036fdc057d1-log-httpd\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.608195 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-scripts\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.608340 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5516ae6d-f229-4b34-8510-6036fdc057d1-run-httpd\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.608389 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc 
kubenswrapper[4813]: I0217 08:59:25.608439 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdldg\" (UniqueName: \"kubernetes.io/projected/5516ae6d-f229-4b34-8510-6036fdc057d1-kube-api-access-qdldg\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.608539 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.709937 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-config-data\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.710012 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5516ae6d-f229-4b34-8510-6036fdc057d1-log-httpd\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.710443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-scripts\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.710662 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/5516ae6d-f229-4b34-8510-6036fdc057d1-log-httpd\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.710756 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5516ae6d-f229-4b34-8510-6036fdc057d1-run-httpd\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.710906 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.710948 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdldg\" (UniqueName: \"kubernetes.io/projected/5516ae6d-f229-4b34-8510-6036fdc057d1-kube-api-access-qdldg\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.711019 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5516ae6d-f229-4b34-8510-6036fdc057d1-run-httpd\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.711141 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.716195 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-scripts\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.716567 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.726961 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdldg\" (UniqueName: \"kubernetes.io/projected/5516ae6d-f229-4b34-8510-6036fdc057d1-kube-api-access-qdldg\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.732401 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-config-data\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.733113 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:25 crc kubenswrapper[4813]: I0217 08:59:25.766224 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:26 crc kubenswrapper[4813]: I0217 08:59:26.024211 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-vv4fr"] Feb 17 08:59:26 crc kubenswrapper[4813]: I0217 08:59:26.045051 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" event={"ID":"5fc77ad3-104d-4a49-8317-c8a1d9204139","Type":"ContainerStarted","Data":"30457641d59002339e1bb18925a3304c6640c2d435b8ea17d5f143cce0761ff9"} Feb 17 08:59:26 crc kubenswrapper[4813]: I0217 08:59:26.223865 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 08:59:27 crc kubenswrapper[4813]: I0217 08:59:27.055424 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5516ae6d-f229-4b34-8510-6036fdc057d1","Type":"ContainerStarted","Data":"311a9b748297b30319f726cc58af29b6a7cd393831a6b94708c8ff76286ccb72"} Feb 17 08:59:27 crc kubenswrapper[4813]: I0217 08:59:27.056751 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" event={"ID":"5fc77ad3-104d-4a49-8317-c8a1d9204139","Type":"ContainerStarted","Data":"74c3931972a17dcfc60833eee3a0251f98e10b8c212d09d35aa97bcc0c193273"} Feb 17 08:59:27 crc kubenswrapper[4813]: I0217 08:59:27.075095 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" podStartSLOduration=2.075080142 podStartE2EDuration="2.075080142s" podCreationTimestamp="2026-02-17 08:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:59:27.071118939 +0000 UTC m=+1114.731880162" watchObservedRunningTime="2026-02-17 08:59:27.075080142 +0000 UTC m=+1114.735841365" Feb 17 08:59:27 crc kubenswrapper[4813]: I0217 08:59:27.363244 
4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 08:59:30 crc kubenswrapper[4813]: I0217 08:59:30.087649 4813 generic.go:334] "Generic (PLEG): container finished" podID="5fc77ad3-104d-4a49-8317-c8a1d9204139" containerID="74c3931972a17dcfc60833eee3a0251f98e10b8c212d09d35aa97bcc0c193273" exitCode=0 Feb 17 08:59:30 crc kubenswrapper[4813]: I0217 08:59:30.087710 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" event={"ID":"5fc77ad3-104d-4a49-8317-c8a1d9204139","Type":"ContainerDied","Data":"74c3931972a17dcfc60833eee3a0251f98e10b8c212d09d35aa97bcc0c193273"} Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.098439 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5516ae6d-f229-4b34-8510-6036fdc057d1","Type":"ContainerStarted","Data":"9cfe3a8f152c8092ed1714ff724c7f816478031fc4f05f7885125047c68c1d13"} Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.445741 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.618361 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fttdv\" (UniqueName: \"kubernetes.io/projected/5fc77ad3-104d-4a49-8317-c8a1d9204139-kube-api-access-fttdv\") pod \"5fc77ad3-104d-4a49-8317-c8a1d9204139\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.618516 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-config-data\") pod \"5fc77ad3-104d-4a49-8317-c8a1d9204139\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.618541 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-combined-ca-bundle\") pod \"5fc77ad3-104d-4a49-8317-c8a1d9204139\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.618566 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-scripts\") pod \"5fc77ad3-104d-4a49-8317-c8a1d9204139\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.618595 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-credential-keys\") pod \"5fc77ad3-104d-4a49-8317-c8a1d9204139\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.618617 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-fernet-keys\") pod \"5fc77ad3-104d-4a49-8317-c8a1d9204139\" (UID: \"5fc77ad3-104d-4a49-8317-c8a1d9204139\") " Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.624624 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-scripts" (OuterVolumeSpecName: "scripts") pod "5fc77ad3-104d-4a49-8317-c8a1d9204139" (UID: "5fc77ad3-104d-4a49-8317-c8a1d9204139"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.625504 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5fc77ad3-104d-4a49-8317-c8a1d9204139" (UID: "5fc77ad3-104d-4a49-8317-c8a1d9204139"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.626654 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5fc77ad3-104d-4a49-8317-c8a1d9204139" (UID: "5fc77ad3-104d-4a49-8317-c8a1d9204139"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.626771 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fc77ad3-104d-4a49-8317-c8a1d9204139-kube-api-access-fttdv" (OuterVolumeSpecName: "kube-api-access-fttdv") pod "5fc77ad3-104d-4a49-8317-c8a1d9204139" (UID: "5fc77ad3-104d-4a49-8317-c8a1d9204139"). InnerVolumeSpecName "kube-api-access-fttdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.653902 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-config-data" (OuterVolumeSpecName: "config-data") pod "5fc77ad3-104d-4a49-8317-c8a1d9204139" (UID: "5fc77ad3-104d-4a49-8317-c8a1d9204139"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.663949 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fc77ad3-104d-4a49-8317-c8a1d9204139" (UID: "5fc77ad3-104d-4a49-8317-c8a1d9204139"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.720159 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fttdv\" (UniqueName: \"kubernetes.io/projected/5fc77ad3-104d-4a49-8317-c8a1d9204139-kube-api-access-fttdv\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.720201 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.720215 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.720225 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-scripts\") on node \"crc\" DevicePath \"\"" Feb 
17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.720236 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:31 crc kubenswrapper[4813]: I0217 08:59:31.720244 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fc77ad3-104d-4a49-8317-c8a1d9204139-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.113443 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" event={"ID":"5fc77ad3-104d-4a49-8317-c8a1d9204139","Type":"ContainerDied","Data":"30457641d59002339e1bb18925a3304c6640c2d435b8ea17d5f143cce0761ff9"} Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.113483 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30457641d59002339e1bb18925a3304c6640c2d435b8ea17d5f143cce0761ff9" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.113587 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-vv4fr" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.185785 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-vv4fr"] Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.194226 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-vv4fr"] Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.301053 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-xh5xx"] Feb 17 08:59:32 crc kubenswrapper[4813]: E0217 08:59:32.302840 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fc77ad3-104d-4a49-8317-c8a1d9204139" containerName="keystone-bootstrap" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.302869 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fc77ad3-104d-4a49-8317-c8a1d9204139" containerName="keystone-bootstrap" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.303174 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fc77ad3-104d-4a49-8317-c8a1d9204139" containerName="keystone-bootstrap" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.304480 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.310946 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-86q5v" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.311139 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.311273 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.311439 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.312826 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-xh5xx"] Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.315607 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.431520 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-scripts\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.431606 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-fernet-keys\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.431789 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v96ft\" (UniqueName: \"kubernetes.io/projected/01612876-0452-4954-8ab4-c101d091a500-kube-api-access-v96ft\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.431972 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-config-data\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.432048 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-combined-ca-bundle\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.432152 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-credential-keys\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.534108 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-combined-ca-bundle\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 
08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.534684 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-credential-keys\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.534869 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-scripts\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.535040 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-fernet-keys\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.535153 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v96ft\" (UniqueName: \"kubernetes.io/projected/01612876-0452-4954-8ab4-c101d091a500-kube-api-access-v96ft\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.535326 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-config-data\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.551507 
4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-config-data\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.551561 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-credential-keys\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.561015 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-combined-ca-bundle\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.567186 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-fernet-keys\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.574896 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-scripts\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.577958 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v96ft\" 
(UniqueName: \"kubernetes.io/projected/01612876-0452-4954-8ab4-c101d091a500-kube-api-access-v96ft\") pod \"keystone-bootstrap-xh5xx\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:32 crc kubenswrapper[4813]: I0217 08:59:32.634960 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:33 crc kubenswrapper[4813]: I0217 08:59:33.129909 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fc77ad3-104d-4a49-8317-c8a1d9204139" path="/var/lib/kubelet/pods/5fc77ad3-104d-4a49-8317-c8a1d9204139/volumes" Feb 17 08:59:33 crc kubenswrapper[4813]: I0217 08:59:33.438564 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-xh5xx"] Feb 17 08:59:34 crc kubenswrapper[4813]: I0217 08:59:34.136292 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5516ae6d-f229-4b34-8510-6036fdc057d1","Type":"ContainerStarted","Data":"81577fb7bfcbd42b1c6d00a2e4937123f00bbd3885e8a601c1d2dd45798fe6be"} Feb 17 08:59:34 crc kubenswrapper[4813]: I0217 08:59:34.137385 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" event={"ID":"01612876-0452-4954-8ab4-c101d091a500","Type":"ContainerStarted","Data":"91ab1f88022585747f5adf3ced1f2693c996a0b906e7fb8cd3b6ee12b86e90b6"} Feb 17 08:59:34 crc kubenswrapper[4813]: I0217 08:59:34.137430 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" event={"ID":"01612876-0452-4954-8ab4-c101d091a500","Type":"ContainerStarted","Data":"10a9e89aecc332fc3fb8434167230e902a1d7ab799fa7b329d7bb1ef037d0750"} Feb 17 08:59:34 crc kubenswrapper[4813]: I0217 08:59:34.156666 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" 
podStartSLOduration=2.156651335 podStartE2EDuration="2.156651335s" podCreationTimestamp="2026-02-17 08:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:59:34.151468547 +0000 UTC m=+1121.812229770" watchObservedRunningTime="2026-02-17 08:59:34.156651335 +0000 UTC m=+1121.817412558" Feb 17 08:59:34 crc kubenswrapper[4813]: I0217 08:59:34.249555 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:34 crc kubenswrapper[4813]: I0217 08:59:34.256572 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:35 crc kubenswrapper[4813]: I0217 08:59:35.162270 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Feb 17 08:59:35 crc kubenswrapper[4813]: I0217 08:59:35.165445 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 08:59:35 crc kubenswrapper[4813]: I0217 08:59:35.165489 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 08:59:37 crc kubenswrapper[4813]: I0217 08:59:37.176852 4813 generic.go:334] "Generic (PLEG): container finished" podID="01612876-0452-4954-8ab4-c101d091a500" containerID="91ab1f88022585747f5adf3ced1f2693c996a0b906e7fb8cd3b6ee12b86e90b6" exitCode=0 Feb 17 08:59:37 crc 
kubenswrapper[4813]: I0217 08:59:37.176928 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" event={"ID":"01612876-0452-4954-8ab4-c101d091a500","Type":"ContainerDied","Data":"91ab1f88022585747f5adf3ced1f2693c996a0b906e7fb8cd3b6ee12b86e90b6"} Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.188584 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5516ae6d-f229-4b34-8510-6036fdc057d1","Type":"ContainerStarted","Data":"b975509cd8f496dbd3f74422c4e8f7a764b0af2f4080efd20e8d0e654ec0b4b5"} Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.510582 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.651908 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-scripts\") pod \"01612876-0452-4954-8ab4-c101d091a500\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.652055 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-combined-ca-bundle\") pod \"01612876-0452-4954-8ab4-c101d091a500\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.652098 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-config-data\") pod \"01612876-0452-4954-8ab4-c101d091a500\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.652116 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-fernet-keys\") pod \"01612876-0452-4954-8ab4-c101d091a500\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.652156 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-credential-keys\") pod \"01612876-0452-4954-8ab4-c101d091a500\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.652181 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v96ft\" (UniqueName: \"kubernetes.io/projected/01612876-0452-4954-8ab4-c101d091a500-kube-api-access-v96ft\") pod \"01612876-0452-4954-8ab4-c101d091a500\" (UID: \"01612876-0452-4954-8ab4-c101d091a500\") " Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.657940 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "01612876-0452-4954-8ab4-c101d091a500" (UID: "01612876-0452-4954-8ab4-c101d091a500"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.658296 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-scripts" (OuterVolumeSpecName: "scripts") pod "01612876-0452-4954-8ab4-c101d091a500" (UID: "01612876-0452-4954-8ab4-c101d091a500"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.659183 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01612876-0452-4954-8ab4-c101d091a500-kube-api-access-v96ft" (OuterVolumeSpecName: "kube-api-access-v96ft") pod "01612876-0452-4954-8ab4-c101d091a500" (UID: "01612876-0452-4954-8ab4-c101d091a500"). InnerVolumeSpecName "kube-api-access-v96ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.670417 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "01612876-0452-4954-8ab4-c101d091a500" (UID: "01612876-0452-4954-8ab4-c101d091a500"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.688517 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-config-data" (OuterVolumeSpecName: "config-data") pod "01612876-0452-4954-8ab4-c101d091a500" (UID: "01612876-0452-4954-8ab4-c101d091a500"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.701995 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01612876-0452-4954-8ab4-c101d091a500" (UID: "01612876-0452-4954-8ab4-c101d091a500"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.754048 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.754078 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.754087 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.754095 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.754103 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v96ft\" (UniqueName: \"kubernetes.io/projected/01612876-0452-4954-8ab4-c101d091a500-kube-api-access-v96ft\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:38 crc kubenswrapper[4813]: I0217 08:59:38.754112 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01612876-0452-4954-8ab4-c101d091a500-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.205391 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" event={"ID":"01612876-0452-4954-8ab4-c101d091a500","Type":"ContainerDied","Data":"10a9e89aecc332fc3fb8434167230e902a1d7ab799fa7b329d7bb1ef037d0750"} Feb 17 08:59:39 crc 
kubenswrapper[4813]: I0217 08:59:39.205459 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10a9e89aecc332fc3fb8434167230e902a1d7ab799fa7b329d7bb1ef037d0750" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.205545 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-xh5xx" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.312389 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-86bc75976-kcxmb"] Feb 17 08:59:39 crc kubenswrapper[4813]: E0217 08:59:39.313070 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01612876-0452-4954-8ab4-c101d091a500" containerName="keystone-bootstrap" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.313085 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="01612876-0452-4954-8ab4-c101d091a500" containerName="keystone-bootstrap" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.313336 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="01612876-0452-4954-8ab4-c101d091a500" containerName="keystone-bootstrap" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.313961 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.319006 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.319351 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.319579 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-86q5v" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.319786 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.319951 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-public-svc" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.321159 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-internal-svc" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.331542 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-86bc75976-kcxmb"] Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.363137 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-internal-tls-certs\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.363429 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-public-tls-certs\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.363543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-credential-keys\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.363711 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-fernet-keys\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.363829 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-config-data\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.363906 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-combined-ca-bundle\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.363990 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-scripts\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.364107 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqws5\" (UniqueName: \"kubernetes.io/projected/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-kube-api-access-tqws5\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.466483 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-internal-tls-certs\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.467035 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-public-tls-certs\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.467184 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-credential-keys\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.467351 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-fernet-keys\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.467556 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-config-data\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.467720 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-combined-ca-bundle\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.467868 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-scripts\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.467995 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqws5\" (UniqueName: \"kubernetes.io/projected/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-kube-api-access-tqws5\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.470817 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-internal-tls-certs\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.471200 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-config-data\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.473221 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-fernet-keys\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.473588 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-scripts\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.473719 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-credential-keys\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.480009 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-combined-ca-bundle\") pod 
\"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.483656 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-public-tls-certs\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.487610 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqws5\" (UniqueName: \"kubernetes.io/projected/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-kube-api-access-tqws5\") pod \"keystone-86bc75976-kcxmb\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") " pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:39 crc kubenswrapper[4813]: I0217 08:59:39.652847 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:40 crc kubenswrapper[4813]: I0217 08:59:40.219548 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-86bc75976-kcxmb"] Feb 17 08:59:40 crc kubenswrapper[4813]: W0217 08:59:40.225186 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad4a78b2_30d4_4e2a_a5a0_c2a63c92f742.slice/crio-38172d23e443e5399ec5a4e34a61d3b1abd5f1e1e61424dbad2d5a5fe5c62138 WatchSource:0}: Error finding container 38172d23e443e5399ec5a4e34a61d3b1abd5f1e1e61424dbad2d5a5fe5c62138: Status 404 returned error can't find the container with id 38172d23e443e5399ec5a4e34a61d3b1abd5f1e1e61424dbad2d5a5fe5c62138 Feb 17 08:59:41 crc kubenswrapper[4813]: I0217 08:59:41.224876 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" event={"ID":"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742","Type":"ContainerStarted","Data":"b8724b5d7e5684c62837c463d4092d4df26104c55f3606d43afb97e56e15a887"} Feb 17 08:59:41 crc kubenswrapper[4813]: I0217 08:59:41.225335 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 08:59:41 crc kubenswrapper[4813]: I0217 08:59:41.225358 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" event={"ID":"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742","Type":"ContainerStarted","Data":"38172d23e443e5399ec5a4e34a61d3b1abd5f1e1e61424dbad2d5a5fe5c62138"} Feb 17 08:59:41 crc kubenswrapper[4813]: I0217 08:59:41.249848 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" podStartSLOduration=2.249828148 podStartE2EDuration="2.249828148s" podCreationTimestamp="2026-02-17 08:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 08:59:41.244740353 +0000 UTC m=+1128.905501596" watchObservedRunningTime="2026-02-17 08:59:41.249828148 +0000 UTC m=+1128.910589381" Feb 17 08:59:47 crc kubenswrapper[4813]: I0217 08:59:47.285830 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5516ae6d-f229-4b34-8510-6036fdc057d1","Type":"ContainerStarted","Data":"30903cff83a66aa30bae5d88945b3b2c957f259056a951fbfd8a8f5fa269810f"} Feb 17 08:59:47 crc kubenswrapper[4813]: I0217 08:59:47.286525 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:47 crc kubenswrapper[4813]: I0217 08:59:47.286063 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="ceilometer-notification-agent" containerID="cri-o://81577fb7bfcbd42b1c6d00a2e4937123f00bbd3885e8a601c1d2dd45798fe6be" gracePeriod=30 Feb 17 08:59:47 crc kubenswrapper[4813]: I0217 08:59:47.286062 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="proxy-httpd" containerID="cri-o://30903cff83a66aa30bae5d88945b3b2c957f259056a951fbfd8a8f5fa269810f" gracePeriod=30 Feb 17 08:59:47 crc kubenswrapper[4813]: I0217 08:59:47.286088 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="sg-core" containerID="cri-o://b975509cd8f496dbd3f74422c4e8f7a764b0af2f4080efd20e8d0e654ec0b4b5" gracePeriod=30 Feb 17 08:59:47 crc kubenswrapper[4813]: I0217 08:59:47.285984 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" 
podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="ceilometer-central-agent" containerID="cri-o://9cfe3a8f152c8092ed1714ff724c7f816478031fc4f05f7885125047c68c1d13" gracePeriod=30 Feb 17 08:59:47 crc kubenswrapper[4813]: I0217 08:59:47.314694 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.652924949 podStartE2EDuration="22.314672639s" podCreationTimestamp="2026-02-17 08:59:25 +0000 UTC" firstStartedPulling="2026-02-17 08:59:26.232748809 +0000 UTC m=+1113.893510032" lastFinishedPulling="2026-02-17 08:59:46.894496499 +0000 UTC m=+1134.555257722" observedRunningTime="2026-02-17 08:59:47.30807599 +0000 UTC m=+1134.968837213" watchObservedRunningTime="2026-02-17 08:59:47.314672639 +0000 UTC m=+1134.975433862" Feb 17 08:59:48 crc kubenswrapper[4813]: I0217 08:59:48.304591 4813 generic.go:334] "Generic (PLEG): container finished" podID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerID="30903cff83a66aa30bae5d88945b3b2c957f259056a951fbfd8a8f5fa269810f" exitCode=0 Feb 17 08:59:48 crc kubenswrapper[4813]: I0217 08:59:48.304927 4813 generic.go:334] "Generic (PLEG): container finished" podID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerID="b975509cd8f496dbd3f74422c4e8f7a764b0af2f4080efd20e8d0e654ec0b4b5" exitCode=2 Feb 17 08:59:48 crc kubenswrapper[4813]: I0217 08:59:48.304947 4813 generic.go:334] "Generic (PLEG): container finished" podID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerID="9cfe3a8f152c8092ed1714ff724c7f816478031fc4f05f7885125047c68c1d13" exitCode=0 Feb 17 08:59:48 crc kubenswrapper[4813]: I0217 08:59:48.304674 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5516ae6d-f229-4b34-8510-6036fdc057d1","Type":"ContainerDied","Data":"30903cff83a66aa30bae5d88945b3b2c957f259056a951fbfd8a8f5fa269810f"} Feb 17 08:59:48 crc kubenswrapper[4813]: I0217 08:59:48.305003 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5516ae6d-f229-4b34-8510-6036fdc057d1","Type":"ContainerDied","Data":"b975509cd8f496dbd3f74422c4e8f7a764b0af2f4080efd20e8d0e654ec0b4b5"} Feb 17 08:59:48 crc kubenswrapper[4813]: I0217 08:59:48.305027 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5516ae6d-f229-4b34-8510-6036fdc057d1","Type":"ContainerDied","Data":"9cfe3a8f152c8092ed1714ff724c7f816478031fc4f05f7885125047c68c1d13"} Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.342653 4813 generic.go:334] "Generic (PLEG): container finished" podID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerID="81577fb7bfcbd42b1c6d00a2e4937123f00bbd3885e8a601c1d2dd45798fe6be" exitCode=0 Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.342680 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5516ae6d-f229-4b34-8510-6036fdc057d1","Type":"ContainerDied","Data":"81577fb7bfcbd42b1c6d00a2e4937123f00bbd3885e8a601c1d2dd45798fe6be"} Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.641157 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.786914 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdldg\" (UniqueName: \"kubernetes.io/projected/5516ae6d-f229-4b34-8510-6036fdc057d1-kube-api-access-qdldg\") pod \"5516ae6d-f229-4b34-8510-6036fdc057d1\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.787005 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-sg-core-conf-yaml\") pod \"5516ae6d-f229-4b34-8510-6036fdc057d1\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.787042 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-scripts\") pod \"5516ae6d-f229-4b34-8510-6036fdc057d1\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.787235 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5516ae6d-f229-4b34-8510-6036fdc057d1-log-httpd\") pod \"5516ae6d-f229-4b34-8510-6036fdc057d1\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.787356 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-config-data\") pod \"5516ae6d-f229-4b34-8510-6036fdc057d1\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.787395 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-combined-ca-bundle\") pod \"5516ae6d-f229-4b34-8510-6036fdc057d1\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.787444 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5516ae6d-f229-4b34-8510-6036fdc057d1-run-httpd\") pod \"5516ae6d-f229-4b34-8510-6036fdc057d1\" (UID: \"5516ae6d-f229-4b34-8510-6036fdc057d1\") " Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.788588 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5516ae6d-f229-4b34-8510-6036fdc057d1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5516ae6d-f229-4b34-8510-6036fdc057d1" (UID: "5516ae6d-f229-4b34-8510-6036fdc057d1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.790066 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5516ae6d-f229-4b34-8510-6036fdc057d1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5516ae6d-f229-4b34-8510-6036fdc057d1" (UID: "5516ae6d-f229-4b34-8510-6036fdc057d1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.798625 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-scripts" (OuterVolumeSpecName: "scripts") pod "5516ae6d-f229-4b34-8510-6036fdc057d1" (UID: "5516ae6d-f229-4b34-8510-6036fdc057d1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.802566 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5516ae6d-f229-4b34-8510-6036fdc057d1-kube-api-access-qdldg" (OuterVolumeSpecName: "kube-api-access-qdldg") pod "5516ae6d-f229-4b34-8510-6036fdc057d1" (UID: "5516ae6d-f229-4b34-8510-6036fdc057d1"). InnerVolumeSpecName "kube-api-access-qdldg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.820747 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5516ae6d-f229-4b34-8510-6036fdc057d1" (UID: "5516ae6d-f229-4b34-8510-6036fdc057d1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.888976 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5516ae6d-f229-4b34-8510-6036fdc057d1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.889009 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5516ae6d-f229-4b34-8510-6036fdc057d1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.889022 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdldg\" (UniqueName: \"kubernetes.io/projected/5516ae6d-f229-4b34-8510-6036fdc057d1-kube-api-access-qdldg\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.889037 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-sg-core-conf-yaml\") on 
node \"crc\" DevicePath \"\"" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.889050 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.894630 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-config-data" (OuterVolumeSpecName: "config-data") pod "5516ae6d-f229-4b34-8510-6036fdc057d1" (UID: "5516ae6d-f229-4b34-8510-6036fdc057d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.896250 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5516ae6d-f229-4b34-8510-6036fdc057d1" (UID: "5516ae6d-f229-4b34-8510-6036fdc057d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.991034 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:51 crc kubenswrapper[4813]: I0217 08:59:51.991073 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5516ae6d-f229-4b34-8510-6036fdc057d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.359370 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5516ae6d-f229-4b34-8510-6036fdc057d1","Type":"ContainerDied","Data":"311a9b748297b30319f726cc58af29b6a7cd393831a6b94708c8ff76286ccb72"} Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.359443 4813 scope.go:117] "RemoveContainer" containerID="30903cff83a66aa30bae5d88945b3b2c957f259056a951fbfd8a8f5fa269810f" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.359494 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.387358 4813 scope.go:117] "RemoveContainer" containerID="b975509cd8f496dbd3f74422c4e8f7a764b0af2f4080efd20e8d0e654ec0b4b5" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.421647 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.434467 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.440214 4813 scope.go:117] "RemoveContainer" containerID="81577fb7bfcbd42b1c6d00a2e4937123f00bbd3885e8a601c1d2dd45798fe6be" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.460593 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 08:59:52 crc kubenswrapper[4813]: E0217 08:59:52.464552 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="proxy-httpd" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.464615 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="proxy-httpd" Feb 17 08:59:52 crc kubenswrapper[4813]: E0217 08:59:52.464660 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="sg-core" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.464674 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="sg-core" Feb 17 08:59:52 crc kubenswrapper[4813]: E0217 08:59:52.464689 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="ceilometer-central-agent" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.464697 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="ceilometer-central-agent" Feb 17 08:59:52 crc kubenswrapper[4813]: E0217 08:59:52.464733 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="ceilometer-notification-agent" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.464742 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="ceilometer-notification-agent" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.466075 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="ceilometer-central-agent" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.466112 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="ceilometer-notification-agent" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.466132 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="proxy-httpd" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.466143 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" containerName="sg-core" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.475612 4813 scope.go:117] "RemoveContainer" containerID="9cfe3a8f152c8092ed1714ff724c7f816478031fc4f05f7885125047c68c1d13" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.483033 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.486525 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.487672 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.490128 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.610382 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf5sj\" (UniqueName: \"kubernetes.io/projected/0e86b9b2-7ce9-424a-8634-f574f186d330-kube-api-access-pf5sj\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.610726 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.610896 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.611055 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-config-data\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.611359 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e86b9b2-7ce9-424a-8634-f574f186d330-run-httpd\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.611733 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-scripts\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.611999 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e86b9b2-7ce9-424a-8634-f574f186d330-log-httpd\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.713429 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf5sj\" (UniqueName: \"kubernetes.io/projected/0e86b9b2-7ce9-424a-8634-f574f186d330-kube-api-access-pf5sj\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.713544 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.713584 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.713630 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-config-data\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.713669 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e86b9b2-7ce9-424a-8634-f574f186d330-run-httpd\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.713777 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-scripts\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.713857 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e86b9b2-7ce9-424a-8634-f574f186d330-log-httpd\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.715010 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e86b9b2-7ce9-424a-8634-f574f186d330-log-httpd\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.715254 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e86b9b2-7ce9-424a-8634-f574f186d330-run-httpd\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.720541 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.721752 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-scripts\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.722653 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-config-data\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.729569 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.750016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf5sj\" (UniqueName: \"kubernetes.io/projected/0e86b9b2-7ce9-424a-8634-f574f186d330-kube-api-access-pf5sj\") pod \"ceilometer-0\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:52 crc kubenswrapper[4813]: I0217 08:59:52.815354 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:53 crc kubenswrapper[4813]: I0217 08:59:53.124588 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5516ae6d-f229-4b34-8510-6036fdc057d1" path="/var/lib/kubelet/pods/5516ae6d-f229-4b34-8510-6036fdc057d1/volumes" Feb 17 08:59:53 crc kubenswrapper[4813]: W0217 08:59:53.324454 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e86b9b2_7ce9_424a_8634_f574f186d330.slice/crio-b5973a3ddcc44f8f3565d1269f30f3ee0f0132eeac57a8bc146cbd41eb0955dd WatchSource:0}: Error finding container b5973a3ddcc44f8f3565d1269f30f3ee0f0132eeac57a8bc146cbd41eb0955dd: Status 404 returned error can't find the container with id b5973a3ddcc44f8f3565d1269f30f3ee0f0132eeac57a8bc146cbd41eb0955dd Feb 17 08:59:53 crc kubenswrapper[4813]: I0217 08:59:53.331542 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 08:59:53 crc kubenswrapper[4813]: I0217 08:59:53.372788 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e86b9b2-7ce9-424a-8634-f574f186d330","Type":"ContainerStarted","Data":"b5973a3ddcc44f8f3565d1269f30f3ee0f0132eeac57a8bc146cbd41eb0955dd"} Feb 17 08:59:54 crc kubenswrapper[4813]: I0217 08:59:54.397714 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e86b9b2-7ce9-424a-8634-f574f186d330","Type":"ContainerStarted","Data":"292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83"} Feb 17 08:59:55 crc kubenswrapper[4813]: I0217 08:59:55.409434 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e86b9b2-7ce9-424a-8634-f574f186d330","Type":"ContainerStarted","Data":"12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372"} Feb 17 08:59:55 crc kubenswrapper[4813]: I0217 08:59:55.409782 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e86b9b2-7ce9-424a-8634-f574f186d330","Type":"ContainerStarted","Data":"f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64"} Feb 17 08:59:57 crc kubenswrapper[4813]: I0217 08:59:57.426831 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e86b9b2-7ce9-424a-8634-f574f186d330","Type":"ContainerStarted","Data":"39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211"} Feb 17 08:59:57 crc kubenswrapper[4813]: I0217 08:59:57.428038 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 08:59:57 crc kubenswrapper[4813]: I0217 08:59:57.456132 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.457728306 podStartE2EDuration="5.45611667s" podCreationTimestamp="2026-02-17 08:59:52 +0000 UTC" firstStartedPulling="2026-02-17 08:59:53.327381633 +0000 UTC m=+1140.988142856" lastFinishedPulling="2026-02-17 08:59:56.325769987 +0000 UTC m=+1143.986531220" observedRunningTime="2026-02-17 08:59:57.455213024 +0000 UTC m=+1145.115974247" watchObservedRunningTime="2026-02-17 08:59:57.45611667 +0000 UTC m=+1145.116877893" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.135356 4813 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk"] Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.136495 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.139047 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.139881 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.152843 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk"] Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.188983 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9eea4efc-c38e-47b7-94a4-c71e94723f42-config-volume\") pod \"collect-profiles-29521980-dsjzk\" (UID: \"9eea4efc-c38e-47b7-94a4-c71e94723f42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.189040 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229d2\" (UniqueName: \"kubernetes.io/projected/9eea4efc-c38e-47b7-94a4-c71e94723f42-kube-api-access-229d2\") pod \"collect-profiles-29521980-dsjzk\" (UID: \"9eea4efc-c38e-47b7-94a4-c71e94723f42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.189149 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/9eea4efc-c38e-47b7-94a4-c71e94723f42-secret-volume\") pod \"collect-profiles-29521980-dsjzk\" (UID: \"9eea4efc-c38e-47b7-94a4-c71e94723f42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.290186 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9eea4efc-c38e-47b7-94a4-c71e94723f42-config-volume\") pod \"collect-profiles-29521980-dsjzk\" (UID: \"9eea4efc-c38e-47b7-94a4-c71e94723f42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.290233 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229d2\" (UniqueName: \"kubernetes.io/projected/9eea4efc-c38e-47b7-94a4-c71e94723f42-kube-api-access-229d2\") pod \"collect-profiles-29521980-dsjzk\" (UID: \"9eea4efc-c38e-47b7-94a4-c71e94723f42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.290317 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9eea4efc-c38e-47b7-94a4-c71e94723f42-secret-volume\") pod \"collect-profiles-29521980-dsjzk\" (UID: \"9eea4efc-c38e-47b7-94a4-c71e94723f42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.291224 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9eea4efc-c38e-47b7-94a4-c71e94723f42-config-volume\") pod \"collect-profiles-29521980-dsjzk\" (UID: \"9eea4efc-c38e-47b7-94a4-c71e94723f42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:00 crc kubenswrapper[4813]: 
I0217 09:00:00.295697 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9eea4efc-c38e-47b7-94a4-c71e94723f42-secret-volume\") pod \"collect-profiles-29521980-dsjzk\" (UID: \"9eea4efc-c38e-47b7-94a4-c71e94723f42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.306925 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229d2\" (UniqueName: \"kubernetes.io/projected/9eea4efc-c38e-47b7-94a4-c71e94723f42-kube-api-access-229d2\") pod \"collect-profiles-29521980-dsjzk\" (UID: \"9eea4efc-c38e-47b7-94a4-c71e94723f42\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.453567 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:00 crc kubenswrapper[4813]: I0217 09:00:00.914131 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk"] Feb 17 09:00:01 crc kubenswrapper[4813]: I0217 09:00:01.472532 4813 generic.go:334] "Generic (PLEG): container finished" podID="9eea4efc-c38e-47b7-94a4-c71e94723f42" containerID="a6b6d07f3eb5fe31f7abf5dd6d27eb051ca001ec9753a0dc91315b17a1e10f5c" exitCode=0 Feb 17 09:00:01 crc kubenswrapper[4813]: I0217 09:00:01.472635 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" event={"ID":"9eea4efc-c38e-47b7-94a4-c71e94723f42","Type":"ContainerDied","Data":"a6b6d07f3eb5fe31f7abf5dd6d27eb051ca001ec9753a0dc91315b17a1e10f5c"} Feb 17 09:00:01 crc kubenswrapper[4813]: I0217 09:00:01.472904 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" 
event={"ID":"9eea4efc-c38e-47b7-94a4-c71e94723f42","Type":"ContainerStarted","Data":"cce3224211fd7af056cd2725395490a0a3038de368cd32db01c4e961af057e1a"} Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:02.792815 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:02.826525 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9eea4efc-c38e-47b7-94a4-c71e94723f42-config-volume\") pod \"9eea4efc-c38e-47b7-94a4-c71e94723f42\" (UID: \"9eea4efc-c38e-47b7-94a4-c71e94723f42\") " Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:02.826574 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9eea4efc-c38e-47b7-94a4-c71e94723f42-secret-volume\") pod \"9eea4efc-c38e-47b7-94a4-c71e94723f42\" (UID: \"9eea4efc-c38e-47b7-94a4-c71e94723f42\") " Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:02.826628 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-229d2\" (UniqueName: \"kubernetes.io/projected/9eea4efc-c38e-47b7-94a4-c71e94723f42-kube-api-access-229d2\") pod \"9eea4efc-c38e-47b7-94a4-c71e94723f42\" (UID: \"9eea4efc-c38e-47b7-94a4-c71e94723f42\") " Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:02.827367 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eea4efc-c38e-47b7-94a4-c71e94723f42-config-volume" (OuterVolumeSpecName: "config-volume") pod "9eea4efc-c38e-47b7-94a4-c71e94723f42" (UID: "9eea4efc-c38e-47b7-94a4-c71e94723f42"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:02.831508 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eea4efc-c38e-47b7-94a4-c71e94723f42-kube-api-access-229d2" (OuterVolumeSpecName: "kube-api-access-229d2") pod "9eea4efc-c38e-47b7-94a4-c71e94723f42" (UID: "9eea4efc-c38e-47b7-94a4-c71e94723f42"). InnerVolumeSpecName "kube-api-access-229d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:02.838170 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eea4efc-c38e-47b7-94a4-c71e94723f42-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9eea4efc-c38e-47b7-94a4-c71e94723f42" (UID: "9eea4efc-c38e-47b7-94a4-c71e94723f42"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:02.928710 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9eea4efc-c38e-47b7-94a4-c71e94723f42-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:02.928738 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9eea4efc-c38e-47b7-94a4-c71e94723f42-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:02.928748 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-229d2\" (UniqueName: \"kubernetes.io/projected/9eea4efc-c38e-47b7-94a4-c71e94723f42-kube-api-access-229d2\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:03.489365 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" 
event={"ID":"9eea4efc-c38e-47b7-94a4-c71e94723f42","Type":"ContainerDied","Data":"cce3224211fd7af056cd2725395490a0a3038de368cd32db01c4e961af057e1a"} Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:03.489801 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cce3224211fd7af056cd2725395490a0a3038de368cd32db01c4e961af057e1a" Feb 17 09:00:03 crc kubenswrapper[4813]: I0217 09:00:03.489476 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521980-dsjzk" Feb 17 09:00:05 crc kubenswrapper[4813]: I0217 09:00:05.165814 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 09:00:05 crc kubenswrapper[4813]: I0217 09:00:05.165908 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 09:00:05 crc kubenswrapper[4813]: I0217 09:00:05.165983 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 09:00:05 crc kubenswrapper[4813]: I0217 09:00:05.166938 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4284fb46e7e232957891008abfaed8819255b12cf7fd236cbf3602b7e1a318a"} pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 09:00:05 crc 
kubenswrapper[4813]: I0217 09:00:05.167056 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" containerID="cri-o://e4284fb46e7e232957891008abfaed8819255b12cf7fd236cbf3602b7e1a318a" gracePeriod=600 Feb 17 09:00:05 crc kubenswrapper[4813]: I0217 09:00:05.514576 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a6ba827-b08b-4163-b067-d9adb119398d" containerID="e4284fb46e7e232957891008abfaed8819255b12cf7fd236cbf3602b7e1a318a" exitCode=0 Feb 17 09:00:05 crc kubenswrapper[4813]: I0217 09:00:05.514746 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerDied","Data":"e4284fb46e7e232957891008abfaed8819255b12cf7fd236cbf3602b7e1a318a"} Feb 17 09:00:05 crc kubenswrapper[4813]: I0217 09:00:05.515044 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerStarted","Data":"e8f1831aa866d7234a4f9752273e3fc18af2abeede207be480bb974f39d90c1d"} Feb 17 09:00:05 crc kubenswrapper[4813]: I0217 09:00:05.515075 4813 scope.go:117] "RemoveContainer" containerID="7212b4b532e53d852c5e6fbe5aa59b96599c01899ec81be229b35b10904557df" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.205724 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.521242 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"] Feb 17 09:00:11 crc kubenswrapper[4813]: E0217 09:00:11.522106 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9eea4efc-c38e-47b7-94a4-c71e94723f42" containerName="collect-profiles" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.522125 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eea4efc-c38e-47b7-94a4-c71e94723f42" containerName="collect-profiles" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.522294 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eea4efc-c38e-47b7-94a4-c71e94723f42" containerName="collect-profiles" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.522791 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.527678 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-config-secret" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.527772 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstackclient-openstackclient-dockercfg-gn4wq" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.535227 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.543699 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.669157 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e915b03-8f70-4532-87be-7bdc51d20ae5-openstack-config-secret\") pod \"openstackclient\" (UID: \"7e915b03-8f70-4532-87be-7bdc51d20ae5\") " pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.669260 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7e915b03-8f70-4532-87be-7bdc51d20ae5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7e915b03-8f70-4532-87be-7bdc51d20ae5\") " pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.669282 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8vd8\" (UniqueName: \"kubernetes.io/projected/7e915b03-8f70-4532-87be-7bdc51d20ae5-kube-api-access-s8vd8\") pod \"openstackclient\" (UID: \"7e915b03-8f70-4532-87be-7bdc51d20ae5\") " pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.669333 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e915b03-8f70-4532-87be-7bdc51d20ae5-openstack-config\") pod \"openstackclient\" (UID: \"7e915b03-8f70-4532-87be-7bdc51d20ae5\") " pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.770990 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e915b03-8f70-4532-87be-7bdc51d20ae5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7e915b03-8f70-4532-87be-7bdc51d20ae5\") " pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.771027 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8vd8\" (UniqueName: \"kubernetes.io/projected/7e915b03-8f70-4532-87be-7bdc51d20ae5-kube-api-access-s8vd8\") pod \"openstackclient\" (UID: \"7e915b03-8f70-4532-87be-7bdc51d20ae5\") " pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.771046 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/7e915b03-8f70-4532-87be-7bdc51d20ae5-openstack-config\") pod \"openstackclient\" (UID: \"7e915b03-8f70-4532-87be-7bdc51d20ae5\") " pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.771110 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e915b03-8f70-4532-87be-7bdc51d20ae5-openstack-config-secret\") pod \"openstackclient\" (UID: \"7e915b03-8f70-4532-87be-7bdc51d20ae5\") " pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.772085 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7e915b03-8f70-4532-87be-7bdc51d20ae5-openstack-config\") pod \"openstackclient\" (UID: \"7e915b03-8f70-4532-87be-7bdc51d20ae5\") " pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.778905 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7e915b03-8f70-4532-87be-7bdc51d20ae5-openstack-config-secret\") pod \"openstackclient\" (UID: \"7e915b03-8f70-4532-87be-7bdc51d20ae5\") " pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.779031 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e915b03-8f70-4532-87be-7bdc51d20ae5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7e915b03-8f70-4532-87be-7bdc51d20ae5\") " pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.802016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8vd8\" (UniqueName: \"kubernetes.io/projected/7e915b03-8f70-4532-87be-7bdc51d20ae5-kube-api-access-s8vd8\") pod 
\"openstackclient\" (UID: \"7e915b03-8f70-4532-87be-7bdc51d20ae5\") " pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:11 crc kubenswrapper[4813]: I0217 09:00:11.855263 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Feb 17 09:00:12 crc kubenswrapper[4813]: I0217 09:00:12.302225 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Feb 17 09:00:12 crc kubenswrapper[4813]: I0217 09:00:12.601990 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"7e915b03-8f70-4532-87be-7bdc51d20ae5","Type":"ContainerStarted","Data":"a9b74f9948227ce11304602b7c0f335f852a79207af4fa28aaa43a679d6d66c9"} Feb 17 09:00:22 crc kubenswrapper[4813]: I0217 09:00:22.688451 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"7e915b03-8f70-4532-87be-7bdc51d20ae5","Type":"ContainerStarted","Data":"9ecd58a2f3cac223863b61e1ddd20194759e8e9535af4799324af795100dc5ba"} Feb 17 09:00:22 crc kubenswrapper[4813]: I0217 09:00:22.716231 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstackclient" podStartSLOduration=2.521056828 podStartE2EDuration="11.716201996s" podCreationTimestamp="2026-02-17 09:00:11 +0000 UTC" firstStartedPulling="2026-02-17 09:00:12.310804886 +0000 UTC m=+1159.971566109" lastFinishedPulling="2026-02-17 09:00:21.505950044 +0000 UTC m=+1169.166711277" observedRunningTime="2026-02-17 09:00:22.704115152 +0000 UTC m=+1170.364876375" watchObservedRunningTime="2026-02-17 09:00:22.716201996 +0000 UTC m=+1170.376963249" Feb 17 09:00:22 crc kubenswrapper[4813]: I0217 09:00:22.822230 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:24 crc kubenswrapper[4813]: I0217 09:00:24.304160 4813 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 17 09:00:24 crc kubenswrapper[4813]: I0217 09:00:24.304729 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="a1969f4f-4612-43d8-bd57-6be840c9d815" containerName="kube-state-metrics" containerID="cri-o://34e4bbc6c62eb618eb95c3966aa0a057ce4ca1b60b558d976e59b34e43cd8860" gracePeriod=30 Feb 17 09:00:24 crc kubenswrapper[4813]: I0217 09:00:24.705231 4813 generic.go:334] "Generic (PLEG): container finished" podID="a1969f4f-4612-43d8-bd57-6be840c9d815" containerID="34e4bbc6c62eb618eb95c3966aa0a057ce4ca1b60b558d976e59b34e43cd8860" exitCode=2 Feb 17 09:00:24 crc kubenswrapper[4813]: I0217 09:00:24.705341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"a1969f4f-4612-43d8-bd57-6be840c9d815","Type":"ContainerDied","Data":"34e4bbc6c62eb618eb95c3966aa0a057ce4ca1b60b558d976e59b34e43cd8860"} Feb 17 09:00:24 crc kubenswrapper[4813]: I0217 09:00:24.705517 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"a1969f4f-4612-43d8-bd57-6be840c9d815","Type":"ContainerDied","Data":"f7e7bb6f58298d86802c1728d414a800dca19892edef017a9ccc20bcb3c67d99"} Feb 17 09:00:24 crc kubenswrapper[4813]: I0217 09:00:24.705533 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7e7bb6f58298d86802c1728d414a800dca19892edef017a9ccc20bcb3c67d99" Feb 17 09:00:24 crc kubenswrapper[4813]: I0217 09:00:24.748553 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:24 crc kubenswrapper[4813]: I0217 09:00:24.890510 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvgtl\" (UniqueName: \"kubernetes.io/projected/a1969f4f-4612-43d8-bd57-6be840c9d815-kube-api-access-wvgtl\") pod \"a1969f4f-4612-43d8-bd57-6be840c9d815\" (UID: \"a1969f4f-4612-43d8-bd57-6be840c9d815\") " Feb 17 09:00:24 crc kubenswrapper[4813]: I0217 09:00:24.895788 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1969f4f-4612-43d8-bd57-6be840c9d815-kube-api-access-wvgtl" (OuterVolumeSpecName: "kube-api-access-wvgtl") pod "a1969f4f-4612-43d8-bd57-6be840c9d815" (UID: "a1969f4f-4612-43d8-bd57-6be840c9d815"). InnerVolumeSpecName "kube-api-access-wvgtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:00:24 crc kubenswrapper[4813]: I0217 09:00:24.994380 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvgtl\" (UniqueName: \"kubernetes.io/projected/a1969f4f-4612-43d8-bd57-6be840c9d815-kube-api-access-wvgtl\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.307115 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.307422 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="ceilometer-central-agent" containerID="cri-o://292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83" gracePeriod=30 Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.307493 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="sg-core" 
containerID="cri-o://12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372" gracePeriod=30 Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.307598 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="ceilometer-notification-agent" containerID="cri-o://f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64" gracePeriod=30 Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.307493 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="proxy-httpd" containerID="cri-o://39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211" gracePeriod=30 Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.740802 4813 generic.go:334] "Generic (PLEG): container finished" podID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerID="39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211" exitCode=0 Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.741069 4813 generic.go:334] "Generic (PLEG): container finished" podID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerID="12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372" exitCode=2 Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.741144 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.740896 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e86b9b2-7ce9-424a-8634-f574f186d330","Type":"ContainerDied","Data":"39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211"} Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.741779 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e86b9b2-7ce9-424a-8634-f574f186d330","Type":"ContainerDied","Data":"12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372"} Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.783223 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.791367 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.807264 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 17 09:00:25 crc kubenswrapper[4813]: E0217 09:00:25.807648 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1969f4f-4612-43d8-bd57-6be840c9d815" containerName="kube-state-metrics" Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.807672 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1969f4f-4612-43d8-bd57-6be840c9d815" containerName="kube-state-metrics" Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.807842 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1969f4f-4612-43d8-bd57-6be840c9d815" containerName="kube-state-metrics" Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.808517 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.810216 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-kube-state-metrics-svc" Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.812664 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"kube-state-metrics-tls-config" Feb 17 09:00:25 crc kubenswrapper[4813]: I0217 09:00:25.817454 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.008911 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fdf3a6-6845-4466-b703-5acec8528f28-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"41fdf3a6-6845-4466-b703-5acec8528f28\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.009008 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fdf3a6-6845-4466-b703-5acec8528f28-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"41fdf3a6-6845-4466-b703-5acec8528f28\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.009096 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49wdf\" (UniqueName: \"kubernetes.io/projected/41fdf3a6-6845-4466-b703-5acec8528f28-kube-api-access-49wdf\") pod \"kube-state-metrics-0\" (UID: \"41fdf3a6-6845-4466-b703-5acec8528f28\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.009432 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/41fdf3a6-6845-4466-b703-5acec8528f28-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"41fdf3a6-6845-4466-b703-5acec8528f28\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.111449 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fdf3a6-6845-4466-b703-5acec8528f28-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"41fdf3a6-6845-4466-b703-5acec8528f28\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.111578 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49wdf\" (UniqueName: \"kubernetes.io/projected/41fdf3a6-6845-4466-b703-5acec8528f28-kube-api-access-49wdf\") pod \"kube-state-metrics-0\" (UID: \"41fdf3a6-6845-4466-b703-5acec8528f28\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.111663 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/41fdf3a6-6845-4466-b703-5acec8528f28-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"41fdf3a6-6845-4466-b703-5acec8528f28\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.111755 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fdf3a6-6845-4466-b703-5acec8528f28-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"41fdf3a6-6845-4466-b703-5acec8528f28\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 
09:00:26.116989 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/41fdf3a6-6845-4466-b703-5acec8528f28-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"41fdf3a6-6845-4466-b703-5acec8528f28\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.118086 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fdf3a6-6845-4466-b703-5acec8528f28-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"41fdf3a6-6845-4466-b703-5acec8528f28\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.121137 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fdf3a6-6845-4466-b703-5acec8528f28-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"41fdf3a6-6845-4466-b703-5acec8528f28\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.148345 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49wdf\" (UniqueName: \"kubernetes.io/projected/41fdf3a6-6845-4466-b703-5acec8528f28-kube-api-access-49wdf\") pod \"kube-state-metrics-0\" (UID: \"41fdf3a6-6845-4466-b703-5acec8528f28\") " pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.424171 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.755026 4813 generic.go:334] "Generic (PLEG): container finished" podID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerID="292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83" exitCode=0 Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.755078 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e86b9b2-7ce9-424a-8634-f574f186d330","Type":"ContainerDied","Data":"292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83"} Feb 17 09:00:26 crc kubenswrapper[4813]: I0217 09:00:26.880380 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.122590 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1969f4f-4612-43d8-bd57-6be840c9d815" path="/var/lib/kubelet/pods/a1969f4f-4612-43d8-bd57-6be840c9d815/volumes" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.590060 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg"] Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.591788 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.594036 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.598944 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-nlj76"] Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.600015 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-nlj76" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.606574 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg"] Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.628952 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-nlj76"] Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.736505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7kvv\" (UniqueName: \"kubernetes.io/projected/ff3f839c-ec07-474e-96a4-7e7e3623584d-kube-api-access-z7kvv\") pod \"watcher-1ec2-account-create-update-xw8jg\" (UID: \"ff3f839c-ec07-474e-96a4-7e7e3623584d\") " pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.736555 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4b0dc5e-424c-4953-b273-62849bf61f9a-operator-scripts\") pod \"watcher-db-create-nlj76\" (UID: \"d4b0dc5e-424c-4953-b273-62849bf61f9a\") " pod="watcher-kuttl-default/watcher-db-create-nlj76" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.736581 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff3f839c-ec07-474e-96a4-7e7e3623584d-operator-scripts\") pod \"watcher-1ec2-account-create-update-xw8jg\" (UID: \"ff3f839c-ec07-474e-96a4-7e7e3623584d\") " pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.736725 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncpqt\" (UniqueName: 
\"kubernetes.io/projected/d4b0dc5e-424c-4953-b273-62849bf61f9a-kube-api-access-ncpqt\") pod \"watcher-db-create-nlj76\" (UID: \"d4b0dc5e-424c-4953-b273-62849bf61f9a\") " pod="watcher-kuttl-default/watcher-db-create-nlj76" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.763920 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"41fdf3a6-6845-4466-b703-5acec8528f28","Type":"ContainerStarted","Data":"0ae2edbd8583c0ef99ea06cb6f56134b2cc0ce3f7618a951f91b4da2aad9c122"} Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.763971 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"41fdf3a6-6845-4466-b703-5acec8528f28","Type":"ContainerStarted","Data":"6321a4b47aaf9ce0ec1147f8afb7d00780660d86272e84fbdabe862874ddd8cf"} Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.764079 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.838798 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7kvv\" (UniqueName: \"kubernetes.io/projected/ff3f839c-ec07-474e-96a4-7e7e3623584d-kube-api-access-z7kvv\") pod \"watcher-1ec2-account-create-update-xw8jg\" (UID: \"ff3f839c-ec07-474e-96a4-7e7e3623584d\") " pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.838842 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4b0dc5e-424c-4953-b273-62849bf61f9a-operator-scripts\") pod \"watcher-db-create-nlj76\" (UID: \"d4b0dc5e-424c-4953-b273-62849bf61f9a\") " pod="watcher-kuttl-default/watcher-db-create-nlj76" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.838861 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff3f839c-ec07-474e-96a4-7e7e3623584d-operator-scripts\") pod \"watcher-1ec2-account-create-update-xw8jg\" (UID: \"ff3f839c-ec07-474e-96a4-7e7e3623584d\") " pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.838897 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncpqt\" (UniqueName: \"kubernetes.io/projected/d4b0dc5e-424c-4953-b273-62849bf61f9a-kube-api-access-ncpqt\") pod \"watcher-db-create-nlj76\" (UID: \"d4b0dc5e-424c-4953-b273-62849bf61f9a\") " pod="watcher-kuttl-default/watcher-db-create-nlj76" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.839927 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4b0dc5e-424c-4953-b273-62849bf61f9a-operator-scripts\") pod \"watcher-db-create-nlj76\" (UID: \"d4b0dc5e-424c-4953-b273-62849bf61f9a\") " pod="watcher-kuttl-default/watcher-db-create-nlj76" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.839945 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff3f839c-ec07-474e-96a4-7e7e3623584d-operator-scripts\") pod \"watcher-1ec2-account-create-update-xw8jg\" (UID: \"ff3f839c-ec07-474e-96a4-7e7e3623584d\") " pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.857090 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncpqt\" (UniqueName: \"kubernetes.io/projected/d4b0dc5e-424c-4953-b273-62849bf61f9a-kube-api-access-ncpqt\") pod \"watcher-db-create-nlj76\" (UID: \"d4b0dc5e-424c-4953-b273-62849bf61f9a\") " pod="watcher-kuttl-default/watcher-db-create-nlj76" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.857805 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7kvv\" (UniqueName: \"kubernetes.io/projected/ff3f839c-ec07-474e-96a4-7e7e3623584d-kube-api-access-z7kvv\") pod \"watcher-1ec2-account-create-update-xw8jg\" (UID: \"ff3f839c-ec07-474e-96a4-7e7e3623584d\") " pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.914331 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" Feb 17 09:00:27 crc kubenswrapper[4813]: I0217 09:00:27.923382 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-nlj76" Feb 17 09:00:28 crc kubenswrapper[4813]: I0217 09:00:28.348860 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.9300416350000003 podStartE2EDuration="3.348843113s" podCreationTimestamp="2026-02-17 09:00:25 +0000 UTC" firstStartedPulling="2026-02-17 09:00:26.889940245 +0000 UTC m=+1174.550701468" lastFinishedPulling="2026-02-17 09:00:27.308741723 +0000 UTC m=+1174.969502946" observedRunningTime="2026-02-17 09:00:27.785894432 +0000 UTC m=+1175.446655655" watchObservedRunningTime="2026-02-17 09:00:28.348843113 +0000 UTC m=+1176.009604336" Feb 17 09:00:28 crc kubenswrapper[4813]: I0217 09:00:28.349912 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg"] Feb 17 09:00:28 crc kubenswrapper[4813]: W0217 09:00:28.352030 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff3f839c_ec07_474e_96a4_7e7e3623584d.slice/crio-e04ecddb1749cd3af6a40a96286b4226d8a69fd1fbfc8862cb4863265cfb5d94 WatchSource:0}: Error finding container 
e04ecddb1749cd3af6a40a96286b4226d8a69fd1fbfc8862cb4863265cfb5d94: Status 404 returned error can't find the container with id e04ecddb1749cd3af6a40a96286b4226d8a69fd1fbfc8862cb4863265cfb5d94 Feb 17 09:00:28 crc kubenswrapper[4813]: W0217 09:00:28.407027 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4b0dc5e_424c_4953_b273_62849bf61f9a.slice/crio-3600209146b5b0facf70e86bea28dc551174175807d3b7e9902ecedc84011e0a WatchSource:0}: Error finding container 3600209146b5b0facf70e86bea28dc551174175807d3b7e9902ecedc84011e0a: Status 404 returned error can't find the container with id 3600209146b5b0facf70e86bea28dc551174175807d3b7e9902ecedc84011e0a Feb 17 09:00:28 crc kubenswrapper[4813]: I0217 09:00:28.408229 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-nlj76"] Feb 17 09:00:28 crc kubenswrapper[4813]: I0217 09:00:28.782627 4813 generic.go:334] "Generic (PLEG): container finished" podID="d4b0dc5e-424c-4953-b273-62849bf61f9a" containerID="5752643aefbf275d9c522e56646e9b3bd85cc7b1488cede4c89d7ade8d252022" exitCode=0 Feb 17 09:00:28 crc kubenswrapper[4813]: I0217 09:00:28.782750 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-nlj76" event={"ID":"d4b0dc5e-424c-4953-b273-62849bf61f9a","Type":"ContainerDied","Data":"5752643aefbf275d9c522e56646e9b3bd85cc7b1488cede4c89d7ade8d252022"} Feb 17 09:00:28 crc kubenswrapper[4813]: I0217 09:00:28.782997 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-nlj76" event={"ID":"d4b0dc5e-424c-4953-b273-62849bf61f9a","Type":"ContainerStarted","Data":"3600209146b5b0facf70e86bea28dc551174175807d3b7e9902ecedc84011e0a"} Feb 17 09:00:28 crc kubenswrapper[4813]: I0217 09:00:28.785118 4813 generic.go:334] "Generic (PLEG): container finished" podID="ff3f839c-ec07-474e-96a4-7e7e3623584d" 
containerID="7b7faa8cad4885c4bb81bbe2f67b28008a8cc82c91cedfc3202b95bbc2859418" exitCode=0 Feb 17 09:00:28 crc kubenswrapper[4813]: I0217 09:00:28.785175 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" event={"ID":"ff3f839c-ec07-474e-96a4-7e7e3623584d","Type":"ContainerDied","Data":"7b7faa8cad4885c4bb81bbe2f67b28008a8cc82c91cedfc3202b95bbc2859418"} Feb 17 09:00:28 crc kubenswrapper[4813]: I0217 09:00:28.785202 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" event={"ID":"ff3f839c-ec07-474e-96a4-7e7e3623584d","Type":"ContainerStarted","Data":"e04ecddb1749cd3af6a40a96286b4226d8a69fd1fbfc8862cb4863265cfb5d94"} Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.197741 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-nlj76" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.207473 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.380682 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff3f839c-ec07-474e-96a4-7e7e3623584d-operator-scripts\") pod \"ff3f839c-ec07-474e-96a4-7e7e3623584d\" (UID: \"ff3f839c-ec07-474e-96a4-7e7e3623584d\") " Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.380827 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7kvv\" (UniqueName: \"kubernetes.io/projected/ff3f839c-ec07-474e-96a4-7e7e3623584d-kube-api-access-z7kvv\") pod \"ff3f839c-ec07-474e-96a4-7e7e3623584d\" (UID: \"ff3f839c-ec07-474e-96a4-7e7e3623584d\") " Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.380887 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4b0dc5e-424c-4953-b273-62849bf61f9a-operator-scripts\") pod \"d4b0dc5e-424c-4953-b273-62849bf61f9a\" (UID: \"d4b0dc5e-424c-4953-b273-62849bf61f9a\") " Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.380943 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncpqt\" (UniqueName: \"kubernetes.io/projected/d4b0dc5e-424c-4953-b273-62849bf61f9a-kube-api-access-ncpqt\") pod \"d4b0dc5e-424c-4953-b273-62849bf61f9a\" (UID: \"d4b0dc5e-424c-4953-b273-62849bf61f9a\") " Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.381630 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff3f839c-ec07-474e-96a4-7e7e3623584d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff3f839c-ec07-474e-96a4-7e7e3623584d" (UID: "ff3f839c-ec07-474e-96a4-7e7e3623584d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.381914 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4b0dc5e-424c-4953-b273-62849bf61f9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4b0dc5e-424c-4953-b273-62849bf61f9a" (UID: "d4b0dc5e-424c-4953-b273-62849bf61f9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.405951 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b0dc5e-424c-4953-b273-62849bf61f9a-kube-api-access-ncpqt" (OuterVolumeSpecName: "kube-api-access-ncpqt") pod "d4b0dc5e-424c-4953-b273-62849bf61f9a" (UID: "d4b0dc5e-424c-4953-b273-62849bf61f9a"). InnerVolumeSpecName "kube-api-access-ncpqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.416903 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff3f839c-ec07-474e-96a4-7e7e3623584d-kube-api-access-z7kvv" (OuterVolumeSpecName: "kube-api-access-z7kvv") pod "ff3f839c-ec07-474e-96a4-7e7e3623584d" (UID: "ff3f839c-ec07-474e-96a4-7e7e3623584d"). InnerVolumeSpecName "kube-api-access-z7kvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.482562 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff3f839c-ec07-474e-96a4-7e7e3623584d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.482597 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7kvv\" (UniqueName: \"kubernetes.io/projected/ff3f839c-ec07-474e-96a4-7e7e3623584d-kube-api-access-z7kvv\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.482611 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4b0dc5e-424c-4953-b273-62849bf61f9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.482621 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncpqt\" (UniqueName: \"kubernetes.io/projected/d4b0dc5e-424c-4953-b273-62849bf61f9a-kube-api-access-ncpqt\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.732104 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.805541 4813 generic.go:334] "Generic (PLEG): container finished" podID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerID="f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64" exitCode=0 Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.805611 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.805624 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e86b9b2-7ce9-424a-8634-f574f186d330","Type":"ContainerDied","Data":"f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64"} Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.805650 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e86b9b2-7ce9-424a-8634-f574f186d330","Type":"ContainerDied","Data":"b5973a3ddcc44f8f3565d1269f30f3ee0f0132eeac57a8bc146cbd41eb0955dd"} Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.805670 4813 scope.go:117] "RemoveContainer" containerID="39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.807254 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-nlj76" event={"ID":"d4b0dc5e-424c-4953-b273-62849bf61f9a","Type":"ContainerDied","Data":"3600209146b5b0facf70e86bea28dc551174175807d3b7e9902ecedc84011e0a"} Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.807281 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3600209146b5b0facf70e86bea28dc551174175807d3b7e9902ecedc84011e0a" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.807342 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-nlj76" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.810696 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" event={"ID":"ff3f839c-ec07-474e-96a4-7e7e3623584d","Type":"ContainerDied","Data":"e04ecddb1749cd3af6a40a96286b4226d8a69fd1fbfc8862cb4863265cfb5d94"} Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.810736 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e04ecddb1749cd3af6a40a96286b4226d8a69fd1fbfc8862cb4863265cfb5d94" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.810791 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.829826 4813 scope.go:117] "RemoveContainer" containerID="12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.853905 4813 scope.go:117] "RemoveContainer" containerID="f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.872245 4813 scope.go:117] "RemoveContainer" containerID="292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.888277 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-config-data\") pod \"0e86b9b2-7ce9-424a-8634-f574f186d330\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.888878 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-sg-core-conf-yaml\") pod 
\"0e86b9b2-7ce9-424a-8634-f574f186d330\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.890554 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e86b9b2-7ce9-424a-8634-f574f186d330-log-httpd\") pod \"0e86b9b2-7ce9-424a-8634-f574f186d330\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.890737 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e86b9b2-7ce9-424a-8634-f574f186d330-run-httpd\") pod \"0e86b9b2-7ce9-424a-8634-f574f186d330\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.890838 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-combined-ca-bundle\") pod \"0e86b9b2-7ce9-424a-8634-f574f186d330\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.890945 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf5sj\" (UniqueName: \"kubernetes.io/projected/0e86b9b2-7ce9-424a-8634-f574f186d330-kube-api-access-pf5sj\") pod \"0e86b9b2-7ce9-424a-8634-f574f186d330\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.891522 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-scripts\") pod \"0e86b9b2-7ce9-424a-8634-f574f186d330\" (UID: \"0e86b9b2-7ce9-424a-8634-f574f186d330\") " Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.892023 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0e86b9b2-7ce9-424a-8634-f574f186d330-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0e86b9b2-7ce9-424a-8634-f574f186d330" (UID: "0e86b9b2-7ce9-424a-8634-f574f186d330"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.893264 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e86b9b2-7ce9-424a-8634-f574f186d330-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.895768 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e86b9b2-7ce9-424a-8634-f574f186d330-kube-api-access-pf5sj" (OuterVolumeSpecName: "kube-api-access-pf5sj") pod "0e86b9b2-7ce9-424a-8634-f574f186d330" (UID: "0e86b9b2-7ce9-424a-8634-f574f186d330"). InnerVolumeSpecName "kube-api-access-pf5sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.896675 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-scripts" (OuterVolumeSpecName: "scripts") pod "0e86b9b2-7ce9-424a-8634-f574f186d330" (UID: "0e86b9b2-7ce9-424a-8634-f574f186d330"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.898777 4813 scope.go:117] "RemoveContainer" containerID="39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.899543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e86b9b2-7ce9-424a-8634-f574f186d330-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0e86b9b2-7ce9-424a-8634-f574f186d330" (UID: "0e86b9b2-7ce9-424a-8634-f574f186d330"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:00:30 crc kubenswrapper[4813]: E0217 09:00:30.900732 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211\": container with ID starting with 39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211 not found: ID does not exist" containerID="39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.900860 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211"} err="failed to get container status \"39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211\": rpc error: code = NotFound desc = could not find container \"39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211\": container with ID starting with 39f1f24215ff6c35ab46e82728b99a4ec206c69bf49b1ac99e9ae1dc0d4d3211 not found: ID does not exist" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.900936 4813 scope.go:117] "RemoveContainer" containerID="12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372" Feb 17 09:00:30 crc kubenswrapper[4813]: E0217 09:00:30.901429 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372\": container with ID starting with 12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372 not found: ID does not exist" containerID="12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.901469 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372"} 
err="failed to get container status \"12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372\": rpc error: code = NotFound desc = could not find container \"12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372\": container with ID starting with 12fe75fbd627273321ab8a6a06eef944b4e611116b90f954044f3306331e2372 not found: ID does not exist" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.901494 4813 scope.go:117] "RemoveContainer" containerID="f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64" Feb 17 09:00:30 crc kubenswrapper[4813]: E0217 09:00:30.901869 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64\": container with ID starting with f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64 not found: ID does not exist" containerID="f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.901918 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64"} err="failed to get container status \"f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64\": rpc error: code = NotFound desc = could not find container \"f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64\": container with ID starting with f4b6ebb9f0aed14f29312b52ea81a6c873e433d54391a2afee24bc49ad35fa64 not found: ID does not exist" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.902009 4813 scope.go:117] "RemoveContainer" containerID="292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83" Feb 17 09:00:30 crc kubenswrapper[4813]: E0217 09:00:30.902484 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83\": container with ID starting with 292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83 not found: ID does not exist" containerID="292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.902558 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83"} err="failed to get container status \"292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83\": rpc error: code = NotFound desc = could not find container \"292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83\": container with ID starting with 292f32c28448c6c14ce48ba64803f36bb682881fae30374feb663e4ebfbf9c83 not found: ID does not exist" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.916539 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0e86b9b2-7ce9-424a-8634-f574f186d330" (UID: "0e86b9b2-7ce9-424a-8634-f574f186d330"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.973786 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e86b9b2-7ce9-424a-8634-f574f186d330" (UID: "0e86b9b2-7ce9-424a-8634-f574f186d330"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.987478 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-config-data" (OuterVolumeSpecName: "config-data") pod "0e86b9b2-7ce9-424a-8634-f574f186d330" (UID: "0e86b9b2-7ce9-424a-8634-f574f186d330"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.999927 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.999962 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf5sj\" (UniqueName: \"kubernetes.io/projected/0e86b9b2-7ce9-424a-8634-f574f186d330-kube-api-access-pf5sj\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.999973 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:30 crc kubenswrapper[4813]: I0217 09:00:30.999984 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:30.999994 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e86b9b2-7ce9-424a-8634-f574f186d330-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.000002 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0e86b9b2-7ce9-424a-8634-f574f186d330-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.182694 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.188880 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.205795 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:00:31 crc kubenswrapper[4813]: E0217 09:00:31.206194 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b0dc5e-424c-4953-b273-62849bf61f9a" containerName="mariadb-database-create" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.206215 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b0dc5e-424c-4953-b273-62849bf61f9a" containerName="mariadb-database-create" Feb 17 09:00:31 crc kubenswrapper[4813]: E0217 09:00:31.206245 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff3f839c-ec07-474e-96a4-7e7e3623584d" containerName="mariadb-account-create-update" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.206256 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff3f839c-ec07-474e-96a4-7e7e3623584d" containerName="mariadb-account-create-update" Feb 17 09:00:31 crc kubenswrapper[4813]: E0217 09:00:31.206273 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="sg-core" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.206282 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="sg-core" Feb 17 09:00:31 crc kubenswrapper[4813]: E0217 09:00:31.206290 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" 
containerName="proxy-httpd" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.206299 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="proxy-httpd" Feb 17 09:00:31 crc kubenswrapper[4813]: E0217 09:00:31.206341 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="ceilometer-central-agent" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.206350 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="ceilometer-central-agent" Feb 17 09:00:31 crc kubenswrapper[4813]: E0217 09:00:31.206363 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="ceilometer-notification-agent" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.206370 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="ceilometer-notification-agent" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.206545 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="ceilometer-central-agent" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.206559 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="ceilometer-notification-agent" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.206574 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="sg-core" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.206588 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" containerName="proxy-httpd" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.206604 4813 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="ff3f839c-ec07-474e-96a4-7e7e3623584d" containerName="mariadb-account-create-update" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.206619 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b0dc5e-424c-4953-b273-62849bf61f9a" containerName="mariadb-database-create" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.210455 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.214880 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.215106 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.215338 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.235275 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.303630 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.303669 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc 
kubenswrapper[4813]: I0217 09:00:31.303709 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810baf22-3810-44d8-8a95-b19474b69078-run-httpd\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.303724 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqhs\" (UniqueName: \"kubernetes.io/projected/810baf22-3810-44d8-8a95-b19474b69078-kube-api-access-pvqhs\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.303756 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-config-data\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.303779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810baf22-3810-44d8-8a95-b19474b69078-log-httpd\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.303796 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.303828 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-scripts\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.404847 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810baf22-3810-44d8-8a95-b19474b69078-run-httpd\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.404896 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvqhs\" (UniqueName: \"kubernetes.io/projected/810baf22-3810-44d8-8a95-b19474b69078-kube-api-access-pvqhs\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.404932 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-config-data\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.404958 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810baf22-3810-44d8-8a95-b19474b69078-log-httpd\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.404975 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.405008 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-scripts\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.405050 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.405069 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.406057 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810baf22-3810-44d8-8a95-b19474b69078-log-httpd\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.406343 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810baf22-3810-44d8-8a95-b19474b69078-run-httpd\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" 
Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.412296 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.413803 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.414047 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.414626 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-config-data\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.421922 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-scripts\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.425213 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvqhs\" (UniqueName: 
\"kubernetes.io/projected/810baf22-3810-44d8-8a95-b19474b69078-kube-api-access-pvqhs\") pod \"ceilometer-0\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.525716 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:31 crc kubenswrapper[4813]: I0217 09:00:31.897401 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:00:32 crc kubenswrapper[4813]: I0217 09:00:32.858606 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"810baf22-3810-44d8-8a95-b19474b69078","Type":"ContainerStarted","Data":"6bf8ab15d9095d275302a06a5d2e200da735ea056ac90ad705004e01603d426b"} Feb 17 09:00:32 crc kubenswrapper[4813]: I0217 09:00:32.858910 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"810baf22-3810-44d8-8a95-b19474b69078","Type":"ContainerStarted","Data":"a76dbc348e6e6c3cb1a3debdeec7cd81f1f353f36dc251b6be8e85c078d2d95e"} Feb 17 09:00:32 crc kubenswrapper[4813]: I0217 09:00:32.864885 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7"] Feb 17 09:00:32 crc kubenswrapper[4813]: I0217 09:00:32.865924 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:32 crc kubenswrapper[4813]: I0217 09:00:32.869703 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 17 09:00:32 crc kubenswrapper[4813]: I0217 09:00:32.870023 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-qksfh" Feb 17 09:00:32 crc kubenswrapper[4813]: I0217 09:00:32.883672 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7"] Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.034907 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-db-sync-config-data\") pod \"watcher-kuttl-db-sync-z5bm7\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.035006 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-z5bm7\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.035037 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbmq8\" (UniqueName: \"kubernetes.io/projected/cc013ac4-e8b4-4f10-991c-bd08de8bc164-kube-api-access-dbmq8\") pod \"watcher-kuttl-db-sync-z5bm7\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.035505 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-config-data\") pod \"watcher-kuttl-db-sync-z5bm7\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.119605 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e86b9b2-7ce9-424a-8634-f574f186d330" path="/var/lib/kubelet/pods/0e86b9b2-7ce9-424a-8634-f574f186d330/volumes" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.136714 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-config-data\") pod \"watcher-kuttl-db-sync-z5bm7\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.136779 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-db-sync-config-data\") pod \"watcher-kuttl-db-sync-z5bm7\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.136810 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbmq8\" (UniqueName: \"kubernetes.io/projected/cc013ac4-e8b4-4f10-991c-bd08de8bc164-kube-api-access-dbmq8\") pod \"watcher-kuttl-db-sync-z5bm7\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.136826 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-z5bm7\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.141022 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-db-sync-config-data\") pod \"watcher-kuttl-db-sync-z5bm7\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.151661 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-z5bm7\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.151875 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-config-data\") pod \"watcher-kuttl-db-sync-z5bm7\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.165901 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbmq8\" (UniqueName: \"kubernetes.io/projected/cc013ac4-e8b4-4f10-991c-bd08de8bc164-kube-api-access-dbmq8\") pod \"watcher-kuttl-db-sync-z5bm7\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.184735 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" Feb 17 09:00:33 crc kubenswrapper[4813]: W0217 09:00:33.603974 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc013ac4_e8b4_4f10_991c_bd08de8bc164.slice/crio-c2212a69bf956ff1289c2bebf02ee3f512e38c607983d51e4afde3b92ed873ab WatchSource:0}: Error finding container c2212a69bf956ff1289c2bebf02ee3f512e38c607983d51e4afde3b92ed873ab: Status 404 returned error can't find the container with id c2212a69bf956ff1289c2bebf02ee3f512e38c607983d51e4afde3b92ed873ab Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.614088 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7"] Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.884026 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"810baf22-3810-44d8-8a95-b19474b69078","Type":"ContainerStarted","Data":"b00a29a225ec35654eab3bd77882bfdf9ce4493ac1803b48b81c412e5af96c07"} Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.884248 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"810baf22-3810-44d8-8a95-b19474b69078","Type":"ContainerStarted","Data":"67fdfbe879ffd42a74b10fc4d45b27c575947150e6f3607107438e9479f45da2"} Feb 17 09:00:33 crc kubenswrapper[4813]: I0217 09:00:33.885964 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" event={"ID":"cc013ac4-e8b4-4f10-991c-bd08de8bc164","Type":"ContainerStarted","Data":"c2212a69bf956ff1289c2bebf02ee3f512e38c607983d51e4afde3b92ed873ab"} Feb 17 09:00:35 crc kubenswrapper[4813]: I0217 09:00:35.912189 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"810baf22-3810-44d8-8a95-b19474b69078","Type":"ContainerStarted","Data":"cd058e445f6fdfe96bf46f3775951354fcd472fd55fc2a6499b783046f849439"} Feb 17 09:00:35 crc kubenswrapper[4813]: I0217 09:00:35.913634 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:00:35 crc kubenswrapper[4813]: I0217 09:00:35.958171 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.072505969 podStartE2EDuration="4.958155631s" podCreationTimestamp="2026-02-17 09:00:31 +0000 UTC" firstStartedPulling="2026-02-17 09:00:31.901443513 +0000 UTC m=+1179.562204726" lastFinishedPulling="2026-02-17 09:00:34.787093165 +0000 UTC m=+1182.447854388" observedRunningTime="2026-02-17 09:00:35.951864202 +0000 UTC m=+1183.612625425" watchObservedRunningTime="2026-02-17 09:00:35.958155631 +0000 UTC m=+1183.618916854" Feb 17 09:00:36 crc kubenswrapper[4813]: I0217 09:00:36.436288 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Feb 17 09:00:48 crc kubenswrapper[4813]: E0217 09:00:48.575095 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Feb 17 09:00:48 crc kubenswrapper[4813]: E0217 09:00:48.575830 4813 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Feb 17 09:00:48 crc kubenswrapper[4813]: E0217 09:00:48.576004 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-kuttl-db-sync,Image:38.102.83.9:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbmq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod watcher-kuttl-db-sync-z5bm7_watcher-kuttl-default(cc013ac4-e8b4-4f10-991c-bd08de8bc164): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 09:00:48 crc kubenswrapper[4813]: E0217 09:00:48.577561 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" podUID="cc013ac4-e8b4-4f10-991c-bd08de8bc164"
Feb 17 09:00:49 crc kubenswrapper[4813]: E0217 09:00:49.025989 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" podUID="cc013ac4-e8b4-4f10-991c-bd08de8bc164"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.184219 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-cron-29521981-fxkjh"]
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.185895 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.224140 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-cron-29521981-fxkjh"]
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.265884 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkfgl\" (UniqueName: \"kubernetes.io/projected/88bf85eb-632d-49ec-a4be-f2724e69ae9a-kube-api-access-xkfgl\") pod \"keystone-cron-29521981-fxkjh\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") " pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.265997 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-config-data\") pod \"keystone-cron-29521981-fxkjh\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") " pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.266096 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-fernet-keys\") pod \"keystone-cron-29521981-fxkjh\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") " pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.266160 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-combined-ca-bundle\") pod \"keystone-cron-29521981-fxkjh\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") " pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.367374 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkfgl\" (UniqueName: \"kubernetes.io/projected/88bf85eb-632d-49ec-a4be-f2724e69ae9a-kube-api-access-xkfgl\") pod \"keystone-cron-29521981-fxkjh\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") " pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.367411 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-config-data\") pod \"keystone-cron-29521981-fxkjh\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") " pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.367460 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-fernet-keys\") pod \"keystone-cron-29521981-fxkjh\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") " pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.367525 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-combined-ca-bundle\") pod \"keystone-cron-29521981-fxkjh\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") " pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.370775 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-fernet-keys\") pod \"keystone-cron-29521981-fxkjh\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") " pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.370871 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-combined-ca-bundle\") pod \"keystone-cron-29521981-fxkjh\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") " pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.372486 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-config-data\") pod \"keystone-cron-29521981-fxkjh\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") " pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.382490 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkfgl\" (UniqueName: \"kubernetes.io/projected/88bf85eb-632d-49ec-a4be-f2724e69ae9a-kube-api-access-xkfgl\") pod \"keystone-cron-29521981-fxkjh\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") " pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.505633 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.691736 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" event={"ID":"cc013ac4-e8b4-4f10-991c-bd08de8bc164","Type":"ContainerStarted","Data":"22252909237a13bef155d264aba57c53fa577b41ebae70a5104af419ea7abc18"}
Feb 17 09:01:00 crc kubenswrapper[4813]: I0217 09:01:00.720032 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" podStartSLOduration=2.099894809 podStartE2EDuration="28.720005917s" podCreationTimestamp="2026-02-17 09:00:32 +0000 UTC" firstStartedPulling="2026-02-17 09:00:33.605827188 +0000 UTC m=+1181.266588411" lastFinishedPulling="2026-02-17 09:01:00.225938256 +0000 UTC m=+1207.886699519" observedRunningTime="2026-02-17 09:01:00.712811642 +0000 UTC m=+1208.373572885" watchObservedRunningTime="2026-02-17 09:01:00.720005917 +0000 UTC m=+1208.380767150"
Feb 17 09:01:01 crc kubenswrapper[4813]: I0217 09:01:01.022715 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-cron-29521981-fxkjh"]
Feb 17 09:01:01 crc kubenswrapper[4813]: W0217 09:01:01.025507 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88bf85eb_632d_49ec_a4be_f2724e69ae9a.slice/crio-3f90f797f73f5c1fd7e6090f88d84ea4f7d646666280b22ee80339665984a405 WatchSource:0}: Error finding container 3f90f797f73f5c1fd7e6090f88d84ea4f7d646666280b22ee80339665984a405: Status 404 returned error can't find the container with id 3f90f797f73f5c1fd7e6090f88d84ea4f7d646666280b22ee80339665984a405
Feb 17 09:01:01 crc kubenswrapper[4813]: I0217 09:01:01.537099 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:01:01 crc kubenswrapper[4813]: I0217 09:01:01.700560 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh" event={"ID":"88bf85eb-632d-49ec-a4be-f2724e69ae9a","Type":"ContainerStarted","Data":"1e321852e39c98b77e726389e7a82b34905403028f6b94745a3b3a70bc34d386"}
Feb 17 09:01:01 crc kubenswrapper[4813]: I0217 09:01:01.700901 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh" event={"ID":"88bf85eb-632d-49ec-a4be-f2724e69ae9a","Type":"ContainerStarted","Data":"3f90f797f73f5c1fd7e6090f88d84ea4f7d646666280b22ee80339665984a405"}
Feb 17 09:01:01 crc kubenswrapper[4813]: I0217 09:01:01.726252 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh" podStartSLOduration=1.726234683 podStartE2EDuration="1.726234683s" podCreationTimestamp="2026-02-17 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:01:01.72367625 +0000 UTC m=+1209.384437473" watchObservedRunningTime="2026-02-17 09:01:01.726234683 +0000 UTC m=+1209.386995906"
Feb 17 09:01:03 crc kubenswrapper[4813]: I0217 09:01:03.734626 4813 generic.go:334] "Generic (PLEG): container finished" podID="88bf85eb-632d-49ec-a4be-f2724e69ae9a" containerID="1e321852e39c98b77e726389e7a82b34905403028f6b94745a3b3a70bc34d386" exitCode=0
Feb 17 09:01:03 crc kubenswrapper[4813]: I0217 09:01:03.734798 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh" event={"ID":"88bf85eb-632d-49ec-a4be-f2724e69ae9a","Type":"ContainerDied","Data":"1e321852e39c98b77e726389e7a82b34905403028f6b94745a3b3a70bc34d386"}
Feb 17 09:01:03 crc kubenswrapper[4813]: I0217 09:01:03.737795 4813 generic.go:334] "Generic (PLEG): container finished" podID="cc013ac4-e8b4-4f10-991c-bd08de8bc164" containerID="22252909237a13bef155d264aba57c53fa577b41ebae70a5104af419ea7abc18" exitCode=0
Feb 17 09:01:03 crc kubenswrapper[4813]: I0217 09:01:03.737864 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" event={"ID":"cc013ac4-e8b4-4f10-991c-bd08de8bc164","Type":"ContainerDied","Data":"22252909237a13bef155d264aba57c53fa577b41ebae70a5104af419ea7abc18"}
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.222588 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7"
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.227584 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.269011 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-config-data\") pod \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") "
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.269125 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-combined-ca-bundle\") pod \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") "
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.269460 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-db-sync-config-data\") pod \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") "
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.269549 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbmq8\" (UniqueName: \"kubernetes.io/projected/cc013ac4-e8b4-4f10-991c-bd08de8bc164-kube-api-access-dbmq8\") pod \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") "
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.269625 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkfgl\" (UniqueName: \"kubernetes.io/projected/88bf85eb-632d-49ec-a4be-f2724e69ae9a-kube-api-access-xkfgl\") pod \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") "
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.269710 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-combined-ca-bundle\") pod \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") "
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.269770 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-fernet-keys\") pod \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\" (UID: \"88bf85eb-632d-49ec-a4be-f2724e69ae9a\") "
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.269835 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-config-data\") pod \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\" (UID: \"cc013ac4-e8b4-4f10-991c-bd08de8bc164\") "
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.287365 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88bf85eb-632d-49ec-a4be-f2724e69ae9a-kube-api-access-xkfgl" (OuterVolumeSpecName: "kube-api-access-xkfgl") pod "88bf85eb-632d-49ec-a4be-f2724e69ae9a" (UID: "88bf85eb-632d-49ec-a4be-f2724e69ae9a"). InnerVolumeSpecName "kube-api-access-xkfgl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.287878 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cc013ac4-e8b4-4f10-991c-bd08de8bc164" (UID: "cc013ac4-e8b4-4f10-991c-bd08de8bc164"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.287923 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "88bf85eb-632d-49ec-a4be-f2724e69ae9a" (UID: "88bf85eb-632d-49ec-a4be-f2724e69ae9a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.292523 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc013ac4-e8b4-4f10-991c-bd08de8bc164-kube-api-access-dbmq8" (OuterVolumeSpecName: "kube-api-access-dbmq8") pod "cc013ac4-e8b4-4f10-991c-bd08de8bc164" (UID: "cc013ac4-e8b4-4f10-991c-bd08de8bc164"). InnerVolumeSpecName "kube-api-access-dbmq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.308721 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88bf85eb-632d-49ec-a4be-f2724e69ae9a" (UID: "88bf85eb-632d-49ec-a4be-f2724e69ae9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.317778 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc013ac4-e8b4-4f10-991c-bd08de8bc164" (UID: "cc013ac4-e8b4-4f10-991c-bd08de8bc164"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.339911 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-config-data" (OuterVolumeSpecName: "config-data") pod "88bf85eb-632d-49ec-a4be-f2724e69ae9a" (UID: "88bf85eb-632d-49ec-a4be-f2724e69ae9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.360428 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-config-data" (OuterVolumeSpecName: "config-data") pod "cc013ac4-e8b4-4f10-991c-bd08de8bc164" (UID: "cc013ac4-e8b4-4f10-991c-bd08de8bc164"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.372243 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.372393 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbmq8\" (UniqueName: \"kubernetes.io/projected/cc013ac4-e8b4-4f10-991c-bd08de8bc164-kube-api-access-dbmq8\") on node \"crc\" DevicePath \"\""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.372482 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkfgl\" (UniqueName: \"kubernetes.io/projected/88bf85eb-632d-49ec-a4be-f2724e69ae9a-kube-api-access-xkfgl\") on node \"crc\" DevicePath \"\""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.372532 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.372582 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.372797 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.372843 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88bf85eb-632d-49ec-a4be-f2724e69ae9a-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.372892 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc013ac4-e8b4-4f10-991c-bd08de8bc164-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.760691 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7"
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.760684 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7" event={"ID":"cc013ac4-e8b4-4f10-991c-bd08de8bc164","Type":"ContainerDied","Data":"c2212a69bf956ff1289c2bebf02ee3f512e38c607983d51e4afde3b92ed873ab"}
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.760869 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2212a69bf956ff1289c2bebf02ee3f512e38c607983d51e4afde3b92ed873ab"
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.763382 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh" event={"ID":"88bf85eb-632d-49ec-a4be-f2724e69ae9a","Type":"ContainerDied","Data":"3f90f797f73f5c1fd7e6090f88d84ea4f7d646666280b22ee80339665984a405"}
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.763445 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f90f797f73f5c1fd7e6090f88d84ea4f7d646666280b22ee80339665984a405"
Feb 17 09:01:05 crc kubenswrapper[4813]: I0217 09:01:05.763507 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29521981-fxkjh"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.046710 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:01:06 crc kubenswrapper[4813]: E0217 09:01:06.047225 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bf85eb-632d-49ec-a4be-f2724e69ae9a" containerName="keystone-cron"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.047251 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bf85eb-632d-49ec-a4be-f2724e69ae9a" containerName="keystone-cron"
Feb 17 09:01:06 crc kubenswrapper[4813]: E0217 09:01:06.047279 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc013ac4-e8b4-4f10-991c-bd08de8bc164" containerName="watcher-kuttl-db-sync"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.047326 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc013ac4-e8b4-4f10-991c-bd08de8bc164" containerName="watcher-kuttl-db-sync"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.047580 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc013ac4-e8b4-4f10-991c-bd08de8bc164" containerName="watcher-kuttl-db-sync"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.047619 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="88bf85eb-632d-49ec-a4be-f2724e69ae9a" containerName="keystone-cron"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.048891 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.050578 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.051320 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-qksfh"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.058015 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.082449 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.082504 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.082528 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae2be5c-4a94-4959-9c47-591de52f3770-logs\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.082546 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rqb7\" (UniqueName: \"kubernetes.io/projected/dae2be5c-4a94-4959-9c47-591de52f3770-kube-api-access-2rqb7\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.082615 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.150681 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.151752 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.154741 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.160993 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.184544 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.184603 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae2be5c-4a94-4959-9c47-591de52f3770-logs\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.184629 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rqb7\" (UniqueName: \"kubernetes.io/projected/dae2be5c-4a94-4959-9c47-591de52f3770-kube-api-access-2rqb7\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.184667 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.184724 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.184746 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.184763 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d69sn\" (UniqueName: \"kubernetes.io/projected/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-kube-api-access-d69sn\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.184793 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.184832 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.184853 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.185030 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae2be5c-4a94-4959-9c47-591de52f3770-logs\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.189811 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.192770 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.194796 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.204488 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rqb7\" (UniqueName: \"kubernetes.io/projected/dae2be5c-4a94-4959-9c47-591de52f3770-kube-api-access-2rqb7\") pod \"watcher-kuttl-api-0\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.217062 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.218011 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.220707 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.231626 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.285840 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.286132 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d69sn\" (UniqueName: \"kubernetes.io/projected/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-kube-api-access-d69sn\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.286191 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.286259 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.286369 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.286637 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.289289 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.289731 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.290373 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.302026 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d69sn\" (UniqueName: \"kubernetes.io/projected/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-kube-api-access-d69sn\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.364603 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.389857 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb97r\" (UniqueName: \"kubernetes.io/projected/085554f2-90cd-4173-897d-25e5bc0dd6e2-kube-api-access-cb97r\") pod \"watcher-kuttl-applier-0\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.389923 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085554f2-90cd-4173-897d-25e5bc0dd6e2-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.389968 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/085554f2-90cd-4173-897d-25e5bc0dd6e2-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.390021 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085554f2-90cd-4173-897d-25e5bc0dd6e2-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.491726 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb97r\" (UniqueName: \"kubernetes.io/projected/085554f2-90cd-4173-897d-25e5bc0dd6e2-kube-api-access-cb97r\") pod \"watcher-kuttl-applier-0\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.491794 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085554f2-90cd-4173-897d-25e5bc0dd6e2-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.491828 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/085554f2-90cd-4173-897d-25e5bc0dd6e2-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.491873 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085554f2-90cd-4173-897d-25e5bc0dd6e2-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.492355 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/085554f2-90cd-4173-897d-25e5bc0dd6e2-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.496131 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085554f2-90cd-4173-897d-25e5bc0dd6e2-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.496271 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085554f2-90cd-4173-897d-25e5bc0dd6e2-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.510835 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb97r\" (UniqueName: \"kubernetes.io/projected/085554f2-90cd-4173-897d-25e5bc0dd6e2-kube-api-access-cb97r\") pod \"watcher-kuttl-applier-0\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.564181 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.572595 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:06 crc kubenswrapper[4813]: I0217 09:01:06.792381 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:06 crc kubenswrapper[4813]: W0217 09:01:06.793322 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddae2be5c_4a94_4959_9c47_591de52f3770.slice/crio-5b3e97fe64891eb8893d5fb252db8541981f2779eb2cb78df3b081d56b7c80e9 WatchSource:0}: Error finding container 5b3e97fe64891eb8893d5fb252db8541981f2779eb2cb78df3b081d56b7c80e9: Status 404 returned error can't find the container with id 5b3e97fe64891eb8893d5fb252db8541981f2779eb2cb78df3b081d56b7c80e9 Feb 17 09:01:07 crc kubenswrapper[4813]: I0217 09:01:07.056770 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:01:07 crc kubenswrapper[4813]: I0217 09:01:07.080711 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:07 crc kubenswrapper[4813]: W0217 09:01:07.083041 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod085554f2_90cd_4173_897d_25e5bc0dd6e2.slice/crio-d7826d5420000bdfb13e22d4692c2c4184b5d92bda919720cf53cd05520b7973 WatchSource:0}: Error finding container d7826d5420000bdfb13e22d4692c2c4184b5d92bda919720cf53cd05520b7973: Status 404 returned error can't find the container with id d7826d5420000bdfb13e22d4692c2c4184b5d92bda919720cf53cd05520b7973 Feb 17 09:01:07 crc kubenswrapper[4813]: I0217 09:01:07.782133 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"085554f2-90cd-4173-897d-25e5bc0dd6e2","Type":"ContainerStarted","Data":"d7826d5420000bdfb13e22d4692c2c4184b5d92bda919720cf53cd05520b7973"} 
Feb 17 09:01:07 crc kubenswrapper[4813]: I0217 09:01:07.785845 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dae2be5c-4a94-4959-9c47-591de52f3770","Type":"ContainerStarted","Data":"86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612"} Feb 17 09:01:07 crc kubenswrapper[4813]: I0217 09:01:07.785879 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dae2be5c-4a94-4959-9c47-591de52f3770","Type":"ContainerStarted","Data":"53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae"} Feb 17 09:01:07 crc kubenswrapper[4813]: I0217 09:01:07.785896 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dae2be5c-4a94-4959-9c47-591de52f3770","Type":"ContainerStarted","Data":"5b3e97fe64891eb8893d5fb252db8541981f2779eb2cb78df3b081d56b7c80e9"} Feb 17 09:01:07 crc kubenswrapper[4813]: I0217 09:01:07.786058 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:07 crc kubenswrapper[4813]: I0217 09:01:07.788655 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1dacf5a3-0ba2-4329-8b57-d473f77dbf16","Type":"ContainerStarted","Data":"6ea27b5240318f3f2dd9eb581b8c9c6e2339fe854de7dbd5809d2d3810f693af"} Feb 17 09:01:07 crc kubenswrapper[4813]: I0217 09:01:07.813713 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.813691622 podStartE2EDuration="1.813691622s" podCreationTimestamp="2026-02-17 09:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:01:07.806724754 +0000 UTC m=+1215.467485987" watchObservedRunningTime="2026-02-17 
09:01:07.813691622 +0000 UTC m=+1215.474452845" Feb 17 09:01:08 crc kubenswrapper[4813]: I0217 09:01:08.797188 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"085554f2-90cd-4173-897d-25e5bc0dd6e2","Type":"ContainerStarted","Data":"6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea"} Feb 17 09:01:08 crc kubenswrapper[4813]: I0217 09:01:08.800810 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1dacf5a3-0ba2-4329-8b57-d473f77dbf16","Type":"ContainerStarted","Data":"3b0ba5ff9739971bffe1df0048f619b7d460005773ef4f644057f6fd1f708cde"} Feb 17 09:01:08 crc kubenswrapper[4813]: I0217 09:01:08.817284 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.7523326479999999 podStartE2EDuration="2.817263233s" podCreationTimestamp="2026-02-17 09:01:06 +0000 UTC" firstStartedPulling="2026-02-17 09:01:07.08549639 +0000 UTC m=+1214.746257613" lastFinishedPulling="2026-02-17 09:01:08.150426975 +0000 UTC m=+1215.811188198" observedRunningTime="2026-02-17 09:01:08.811667843 +0000 UTC m=+1216.472429086" watchObservedRunningTime="2026-02-17 09:01:08.817263233 +0000 UTC m=+1216.478024466" Feb 17 09:01:08 crc kubenswrapper[4813]: I0217 09:01:08.834578 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.74645984 podStartE2EDuration="2.834561235s" podCreationTimestamp="2026-02-17 09:01:06 +0000 UTC" firstStartedPulling="2026-02-17 09:01:07.053104978 +0000 UTC m=+1214.713866211" lastFinishedPulling="2026-02-17 09:01:08.141206383 +0000 UTC m=+1215.801967606" observedRunningTime="2026-02-17 09:01:08.83089448 +0000 UTC m=+1216.491655723" watchObservedRunningTime="2026-02-17 09:01:08.834561235 +0000 UTC m=+1216.495322468" Feb 17 09:01:09 crc 
kubenswrapper[4813]: I0217 09:01:09.811151 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:11 crc kubenswrapper[4813]: I0217 09:01:11.365349 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:11 crc kubenswrapper[4813]: I0217 09:01:11.572718 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:16 crc kubenswrapper[4813]: I0217 09:01:16.365503 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:16 crc kubenswrapper[4813]: I0217 09:01:16.373752 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:16 crc kubenswrapper[4813]: I0217 09:01:16.565137 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:16 crc kubenswrapper[4813]: I0217 09:01:16.573477 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:16 crc kubenswrapper[4813]: I0217 09:01:16.608781 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:16 crc kubenswrapper[4813]: I0217 09:01:16.609123 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:16 crc kubenswrapper[4813]: I0217 09:01:16.864115 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:16 crc kubenswrapper[4813]: I0217 09:01:16.869107 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:16 crc kubenswrapper[4813]: I0217 09:01:16.884846 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:16 crc kubenswrapper[4813]: I0217 09:01:16.893684 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:18 crc kubenswrapper[4813]: I0217 09:01:18.095978 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:01:18 crc kubenswrapper[4813]: I0217 09:01:18.096228 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="ceilometer-central-agent" containerID="cri-o://6bf8ab15d9095d275302a06a5d2e200da735ea056ac90ad705004e01603d426b" gracePeriod=30 Feb 17 09:01:18 crc kubenswrapper[4813]: I0217 09:01:18.096294 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="proxy-httpd" containerID="cri-o://cd058e445f6fdfe96bf46f3775951354fcd472fd55fc2a6499b783046f849439" gracePeriod=30 Feb 17 09:01:18 crc kubenswrapper[4813]: I0217 09:01:18.096386 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="ceilometer-notification-agent" containerID="cri-o://67fdfbe879ffd42a74b10fc4d45b27c575947150e6f3607107438e9479f45da2" gracePeriod=30 Feb 17 09:01:18 crc kubenswrapper[4813]: I0217 09:01:18.096432 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="sg-core" 
containerID="cri-o://b00a29a225ec35654eab3bd77882bfdf9ce4493ac1803b48b81c412e5af96c07" gracePeriod=30 Feb 17 09:01:18 crc kubenswrapper[4813]: I0217 09:01:18.893427 4813 generic.go:334] "Generic (PLEG): container finished" podID="810baf22-3810-44d8-8a95-b19474b69078" containerID="cd058e445f6fdfe96bf46f3775951354fcd472fd55fc2a6499b783046f849439" exitCode=0 Feb 17 09:01:18 crc kubenswrapper[4813]: I0217 09:01:18.893465 4813 generic.go:334] "Generic (PLEG): container finished" podID="810baf22-3810-44d8-8a95-b19474b69078" containerID="b00a29a225ec35654eab3bd77882bfdf9ce4493ac1803b48b81c412e5af96c07" exitCode=2 Feb 17 09:01:18 crc kubenswrapper[4813]: I0217 09:01:18.893476 4813 generic.go:334] "Generic (PLEG): container finished" podID="810baf22-3810-44d8-8a95-b19474b69078" containerID="6bf8ab15d9095d275302a06a5d2e200da735ea056ac90ad705004e01603d426b" exitCode=0 Feb 17 09:01:18 crc kubenswrapper[4813]: I0217 09:01:18.893449 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"810baf22-3810-44d8-8a95-b19474b69078","Type":"ContainerDied","Data":"cd058e445f6fdfe96bf46f3775951354fcd472fd55fc2a6499b783046f849439"} Feb 17 09:01:18 crc kubenswrapper[4813]: I0217 09:01:18.893772 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"810baf22-3810-44d8-8a95-b19474b69078","Type":"ContainerDied","Data":"b00a29a225ec35654eab3bd77882bfdf9ce4493ac1803b48b81c412e5af96c07"} Feb 17 09:01:18 crc kubenswrapper[4813]: I0217 09:01:18.893868 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"810baf22-3810-44d8-8a95-b19474b69078","Type":"ContainerDied","Data":"6bf8ab15d9095d275302a06a5d2e200da735ea056ac90ad705004e01603d426b"} Feb 17 09:01:19 crc kubenswrapper[4813]: I0217 09:01:19.684688 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:19 crc 
kubenswrapper[4813]: I0217 09:01:19.685222 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="085554f2-90cd-4173-897d-25e5bc0dd6e2" containerName="watcher-applier" containerID="cri-o://6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea" gracePeriod=30 Feb 17 09:01:19 crc kubenswrapper[4813]: I0217 09:01:19.699263 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:01:19 crc kubenswrapper[4813]: I0217 09:01:19.717983 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:19 crc kubenswrapper[4813]: I0217 09:01:19.718224 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="dae2be5c-4a94-4959-9c47-591de52f3770" containerName="watcher-kuttl-api-log" containerID="cri-o://53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae" gracePeriod=30 Feb 17 09:01:19 crc kubenswrapper[4813]: I0217 09:01:19.718354 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="dae2be5c-4a94-4959-9c47-591de52f3770" containerName="watcher-api" containerID="cri-o://86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612" gracePeriod=30 Feb 17 09:01:19 crc kubenswrapper[4813]: I0217 09:01:19.903120 4813 generic.go:334] "Generic (PLEG): container finished" podID="dae2be5c-4a94-4959-9c47-591de52f3770" containerID="53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae" exitCode=143 Feb 17 09:01:19 crc kubenswrapper[4813]: I0217 09:01:19.903197 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"dae2be5c-4a94-4959-9c47-591de52f3770","Type":"ContainerDied","Data":"53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae"} Feb 17 09:01:19 crc kubenswrapper[4813]: I0217 09:01:19.903333 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="1dacf5a3-0ba2-4329-8b57-d473f77dbf16" containerName="watcher-decision-engine" containerID="cri-o://3b0ba5ff9739971bffe1df0048f619b7d460005773ef4f644057f6fd1f708cde" gracePeriod=30 Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.673632 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.782482 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-custom-prometheus-ca\") pod \"dae2be5c-4a94-4959-9c47-591de52f3770\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.782575 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-combined-ca-bundle\") pod \"dae2be5c-4a94-4959-9c47-591de52f3770\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.782693 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rqb7\" (UniqueName: \"kubernetes.io/projected/dae2be5c-4a94-4959-9c47-591de52f3770-kube-api-access-2rqb7\") pod \"dae2be5c-4a94-4959-9c47-591de52f3770\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.782720 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-config-data\") pod \"dae2be5c-4a94-4959-9c47-591de52f3770\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.782736 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae2be5c-4a94-4959-9c47-591de52f3770-logs\") pod \"dae2be5c-4a94-4959-9c47-591de52f3770\" (UID: \"dae2be5c-4a94-4959-9c47-591de52f3770\") " Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.783370 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dae2be5c-4a94-4959-9c47-591de52f3770-logs" (OuterVolumeSpecName: "logs") pod "dae2be5c-4a94-4959-9c47-591de52f3770" (UID: "dae2be5c-4a94-4959-9c47-591de52f3770"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.792201 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae2be5c-4a94-4959-9c47-591de52f3770-kube-api-access-2rqb7" (OuterVolumeSpecName: "kube-api-access-2rqb7") pod "dae2be5c-4a94-4959-9c47-591de52f3770" (UID: "dae2be5c-4a94-4959-9c47-591de52f3770"). InnerVolumeSpecName "kube-api-access-2rqb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.820412 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dae2be5c-4a94-4959-9c47-591de52f3770" (UID: "dae2be5c-4a94-4959-9c47-591de52f3770"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.856059 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-config-data" (OuterVolumeSpecName: "config-data") pod "dae2be5c-4a94-4959-9c47-591de52f3770" (UID: "dae2be5c-4a94-4959-9c47-591de52f3770"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.859491 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "dae2be5c-4a94-4959-9c47-591de52f3770" (UID: "dae2be5c-4a94-4959-9c47-591de52f3770"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.884960 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.884996 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rqb7\" (UniqueName: \"kubernetes.io/projected/dae2be5c-4a94-4959-9c47-591de52f3770-kube-api-access-2rqb7\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.885007 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.885015 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dae2be5c-4a94-4959-9c47-591de52f3770-logs\") on node \"crc\" DevicePath \"\"" Feb 
17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.885026 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dae2be5c-4a94-4959-9c47-591de52f3770-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.918533 4813 generic.go:334] "Generic (PLEG): container finished" podID="810baf22-3810-44d8-8a95-b19474b69078" containerID="67fdfbe879ffd42a74b10fc4d45b27c575947150e6f3607107438e9479f45da2" exitCode=0 Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.918587 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"810baf22-3810-44d8-8a95-b19474b69078","Type":"ContainerDied","Data":"67fdfbe879ffd42a74b10fc4d45b27c575947150e6f3607107438e9479f45da2"} Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.922075 4813 generic.go:334] "Generic (PLEG): container finished" podID="dae2be5c-4a94-4959-9c47-591de52f3770" containerID="86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612" exitCode=0 Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.922111 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dae2be5c-4a94-4959-9c47-591de52f3770","Type":"ContainerDied","Data":"86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612"} Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.922128 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dae2be5c-4a94-4959-9c47-591de52f3770","Type":"ContainerDied","Data":"5b3e97fe64891eb8893d5fb252db8541981f2779eb2cb78df3b081d56b7c80e9"} Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.922143 4813 scope.go:117] "RemoveContainer" containerID="86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.922285 4813 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.925159 4813 generic.go:334] "Generic (PLEG): container finished" podID="1dacf5a3-0ba2-4329-8b57-d473f77dbf16" containerID="3b0ba5ff9739971bffe1df0048f619b7d460005773ef4f644057f6fd1f708cde" exitCode=0 Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.925196 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1dacf5a3-0ba2-4329-8b57-d473f77dbf16","Type":"ContainerDied","Data":"3b0ba5ff9739971bffe1df0048f619b7d460005773ef4f644057f6fd1f708cde"} Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.949199 4813 scope.go:117] "RemoveContainer" containerID="53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.983775 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.991130 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.997882 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:20 crc kubenswrapper[4813]: E0217 09:01:20.998192 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae2be5c-4a94-4959-9c47-591de52f3770" containerName="watcher-api" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.998204 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae2be5c-4a94-4959-9c47-591de52f3770" containerName="watcher-api" Feb 17 09:01:20 crc kubenswrapper[4813]: E0217 09:01:20.998231 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae2be5c-4a94-4959-9c47-591de52f3770" containerName="watcher-kuttl-api-log" Feb 17 09:01:20 crc kubenswrapper[4813]: 
I0217 09:01:20.998237 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae2be5c-4a94-4959-9c47-591de52f3770" containerName="watcher-kuttl-api-log" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.998407 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae2be5c-4a94-4959-9c47-591de52f3770" containerName="watcher-api" Feb 17 09:01:20 crc kubenswrapper[4813]: I0217 09:01:20.998426 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae2be5c-4a94-4959-9c47-591de52f3770" containerName="watcher-kuttl-api-log" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.013253 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.013367 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.018738 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.038085 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.054755 4813 scope.go:117] "RemoveContainer" containerID="86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612" Feb 17 09:01:21 crc kubenswrapper[4813]: E0217 09:01:21.063409 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612\": container with ID starting with 86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612 not found: ID does not exist" containerID="86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.063452 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612"} err="failed to get container status \"86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612\": rpc error: code = NotFound desc = could not find container \"86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612\": container with ID starting with 86a6b9d82781805b410be71ce42dbc4b5a5b92494abffef07212f4e2899b0612 not found: ID does not exist" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.063477 4813 scope.go:117] "RemoveContainer" containerID="53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae" Feb 17 09:01:21 crc kubenswrapper[4813]: E0217 09:01:21.066686 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae\": container with ID starting with 53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae not found: ID does not exist" containerID="53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 
09:01:21.066719 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae"} err="failed to get container status \"53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae\": rpc error: code = NotFound desc = could not find container \"53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae\": container with ID starting with 53232025c6be4189adec498e04e187ff98802424d26b7cc0800a8a88b76d83ae not found: ID does not exist" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.153966 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae2be5c-4a94-4959-9c47-591de52f3770" path="/var/lib/kubelet/pods/dae2be5c-4a94-4959-9c47-591de52f3770/volumes" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.172187 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193294 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-ceilometer-tls-certs\") pod \"810baf22-3810-44d8-8a95-b19474b69078\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193411 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-config-data\") pod \"810baf22-3810-44d8-8a95-b19474b69078\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193440 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810baf22-3810-44d8-8a95-b19474b69078-run-httpd\") pod 
\"810baf22-3810-44d8-8a95-b19474b69078\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193458 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-sg-core-conf-yaml\") pod \"810baf22-3810-44d8-8a95-b19474b69078\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193477 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvqhs\" (UniqueName: \"kubernetes.io/projected/810baf22-3810-44d8-8a95-b19474b69078-kube-api-access-pvqhs\") pod \"810baf22-3810-44d8-8a95-b19474b69078\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193515 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810baf22-3810-44d8-8a95-b19474b69078-log-httpd\") pod \"810baf22-3810-44d8-8a95-b19474b69078\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193547 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-scripts\") pod \"810baf22-3810-44d8-8a95-b19474b69078\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193564 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-combined-ca-bundle\") pod \"810baf22-3810-44d8-8a95-b19474b69078\" (UID: \"810baf22-3810-44d8-8a95-b19474b69078\") " Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193729 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193754 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193780 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7qnk\" (UniqueName: \"kubernetes.io/projected/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-kube-api-access-x7qnk\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193851 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.193900 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.195381 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/810baf22-3810-44d8-8a95-b19474b69078-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "810baf22-3810-44d8-8a95-b19474b69078" (UID: "810baf22-3810-44d8-8a95-b19474b69078"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.195603 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/810baf22-3810-44d8-8a95-b19474b69078-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "810baf22-3810-44d8-8a95-b19474b69078" (UID: "810baf22-3810-44d8-8a95-b19474b69078"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.199166 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-scripts" (OuterVolumeSpecName: "scripts") pod "810baf22-3810-44d8-8a95-b19474b69078" (UID: "810baf22-3810-44d8-8a95-b19474b69078"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.199827 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810baf22-3810-44d8-8a95-b19474b69078-kube-api-access-pvqhs" (OuterVolumeSpecName: "kube-api-access-pvqhs") pod "810baf22-3810-44d8-8a95-b19474b69078" (UID: "810baf22-3810-44d8-8a95-b19474b69078"). InnerVolumeSpecName "kube-api-access-pvqhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.228953 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "810baf22-3810-44d8-8a95-b19474b69078" (UID: "810baf22-3810-44d8-8a95-b19474b69078"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.252620 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "810baf22-3810-44d8-8a95-b19474b69078" (UID: "810baf22-3810-44d8-8a95-b19474b69078"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.281557 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "810baf22-3810-44d8-8a95-b19474b69078" (UID: "810baf22-3810-44d8-8a95-b19474b69078"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.294670 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d69sn\" (UniqueName: \"kubernetes.io/projected/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-kube-api-access-d69sn\") pod \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.294724 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-config-data\") pod \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.294861 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-logs\") pod \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") 
" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.294887 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-combined-ca-bundle\") pod \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.294962 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-custom-prometheus-ca\") pod \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\" (UID: \"1dacf5a3-0ba2-4329-8b57-d473f77dbf16\") " Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.295133 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.295158 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.295182 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7qnk\" (UniqueName: \"kubernetes.io/projected/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-kube-api-access-x7qnk\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.295272 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.295330 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.295377 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810baf22-3810-44d8-8a95-b19474b69078-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.295388 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.295397 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.295408 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.295418 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/810baf22-3810-44d8-8a95-b19474b69078-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc 
kubenswrapper[4813]: I0217 09:01:21.295426 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.295435 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvqhs\" (UniqueName: \"kubernetes.io/projected/810baf22-3810-44d8-8a95-b19474b69078-kube-api-access-pvqhs\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.296299 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.296915 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-logs" (OuterVolumeSpecName: "logs") pod "1dacf5a3-0ba2-4329-8b57-d473f77dbf16" (UID: "1dacf5a3-0ba2-4329-8b57-d473f77dbf16"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.299178 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-kube-api-access-d69sn" (OuterVolumeSpecName: "kube-api-access-d69sn") pod "1dacf5a3-0ba2-4329-8b57-d473f77dbf16" (UID: "1dacf5a3-0ba2-4329-8b57-d473f77dbf16"). InnerVolumeSpecName "kube-api-access-d69sn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.299888 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-config-data" (OuterVolumeSpecName: "config-data") pod "810baf22-3810-44d8-8a95-b19474b69078" (UID: "810baf22-3810-44d8-8a95-b19474b69078"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.300008 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.300049 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.301860 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.313070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7qnk\" (UniqueName: \"kubernetes.io/projected/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-kube-api-access-x7qnk\") pod \"watcher-kuttl-api-0\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.321083 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1dacf5a3-0ba2-4329-8b57-d473f77dbf16" (UID: "1dacf5a3-0ba2-4329-8b57-d473f77dbf16"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.322875 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dacf5a3-0ba2-4329-8b57-d473f77dbf16" (UID: "1dacf5a3-0ba2-4329-8b57-d473f77dbf16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.334801 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-config-data" (OuterVolumeSpecName: "config-data") pod "1dacf5a3-0ba2-4329-8b57-d473f77dbf16" (UID: "1dacf5a3-0ba2-4329-8b57-d473f77dbf16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.356210 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.397472 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.397510 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.397526 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d69sn\" (UniqueName: \"kubernetes.io/projected/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-kube-api-access-d69sn\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.397538 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.397550 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810baf22-3810-44d8-8a95-b19474b69078-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.397560 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dacf5a3-0ba2-4329-8b57-d473f77dbf16-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:21 crc kubenswrapper[4813]: E0217 09:01:21.575174 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:01:21 crc kubenswrapper[4813]: E0217 09:01:21.578068 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:01:21 crc kubenswrapper[4813]: E0217 09:01:21.580511 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:01:21 crc kubenswrapper[4813]: E0217 09:01:21.580547 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="085554f2-90cd-4173-897d-25e5bc0dd6e2" containerName="watcher-applier" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.815717 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.933291 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c","Type":"ContainerStarted","Data":"599da4f70650246cc6f2d90c3932edaab469d1b014d9d1e5079ea6fee453db63"} Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.935572 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"1dacf5a3-0ba2-4329-8b57-d473f77dbf16","Type":"ContainerDied","Data":"6ea27b5240318f3f2dd9eb581b8c9c6e2339fe854de7dbd5809d2d3810f693af"} Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.935595 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.935623 4813 scope.go:117] "RemoveContainer" containerID="3b0ba5ff9739971bffe1df0048f619b7d460005773ef4f644057f6fd1f708cde" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.948357 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"810baf22-3810-44d8-8a95-b19474b69078","Type":"ContainerDied","Data":"a76dbc348e6e6c3cb1a3debdeec7cd81f1f353f36dc251b6be8e85c078d2d95e"} Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.948455 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.974471 4813 scope.go:117] "RemoveContainer" containerID="cd058e445f6fdfe96bf46f3775951354fcd472fd55fc2a6499b783046f849439" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.983691 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.996569 4813 scope.go:117] "RemoveContainer" containerID="b00a29a225ec35654eab3bd77882bfdf9ce4493ac1803b48b81c412e5af96c07" Feb 17 09:01:21 crc kubenswrapper[4813]: I0217 09:01:21.998861 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.024580 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.031081 4813 scope.go:117] 
"RemoveContainer" containerID="67fdfbe879ffd42a74b10fc4d45b27c575947150e6f3607107438e9479f45da2" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.049321 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.055634 4813 scope.go:117] "RemoveContainer" containerID="6bf8ab15d9095d275302a06a5d2e200da735ea056ac90ad705004e01603d426b" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.062188 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:01:22 crc kubenswrapper[4813]: E0217 09:01:22.062515 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="proxy-httpd" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.062532 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="proxy-httpd" Feb 17 09:01:22 crc kubenswrapper[4813]: E0217 09:01:22.062549 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="sg-core" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.062556 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="sg-core" Feb 17 09:01:22 crc kubenswrapper[4813]: E0217 09:01:22.062573 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="ceilometer-notification-agent" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.062580 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="ceilometer-notification-agent" Feb 17 09:01:22 crc kubenswrapper[4813]: E0217 09:01:22.062600 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dacf5a3-0ba2-4329-8b57-d473f77dbf16" 
containerName="watcher-decision-engine" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.062608 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dacf5a3-0ba2-4329-8b57-d473f77dbf16" containerName="watcher-decision-engine" Feb 17 09:01:22 crc kubenswrapper[4813]: E0217 09:01:22.062617 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="ceilometer-central-agent" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.062623 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="ceilometer-central-agent" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.062760 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dacf5a3-0ba2-4329-8b57-d473f77dbf16" containerName="watcher-decision-engine" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.062772 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="proxy-httpd" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.062779 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="sg-core" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.062794 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="ceilometer-notification-agent" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.062802 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="810baf22-3810-44d8-8a95-b19474b69078" containerName="ceilometer-central-agent" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.063573 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.068941 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.093200 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.106455 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.111102 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.112952 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.113287 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.114240 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.115458 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215325 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47s2k\" (UniqueName: \"kubernetes.io/projected/79160a99-2d68-4803-913f-f2a0345fe683-kube-api-access-47s2k\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215380 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215414 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215439 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215456 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215480 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215523 
4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-config-data\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215550 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215576 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215593 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79160a99-2d68-4803-913f-f2a0345fe683-run-httpd\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215621 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-scripts\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215635 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79160a99-2d68-4803-913f-f2a0345fe683-log-httpd\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.215658 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5f5s\" (UniqueName: \"kubernetes.io/projected/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-kube-api-access-w5f5s\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317167 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317457 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317491 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317512 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317538 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317571 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-config-data\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317601 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317626 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317640 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/79160a99-2d68-4803-913f-f2a0345fe683-run-httpd\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317658 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-scripts\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317673 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79160a99-2d68-4803-913f-f2a0345fe683-log-httpd\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317694 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5f5s\" (UniqueName: \"kubernetes.io/projected/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-kube-api-access-w5f5s\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.317733 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47s2k\" (UniqueName: \"kubernetes.io/projected/79160a99-2d68-4803-913f-f2a0345fe683-kube-api-access-47s2k\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.318787 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79160a99-2d68-4803-913f-f2a0345fe683-log-httpd\") pod \"ceilometer-0\" (UID: 
\"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.318975 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79160a99-2d68-4803-913f-f2a0345fe683-run-httpd\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.319225 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.322265 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.324288 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.327098 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-scripts\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.332338 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.332462 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.332774 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-config-data\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.336081 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.336903 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.341002 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5f5s\" (UniqueName: 
\"kubernetes.io/projected/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-kube-api-access-w5f5s\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.342583 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47s2k\" (UniqueName: \"kubernetes.io/projected/79160a99-2d68-4803-913f-f2a0345fe683-kube-api-access-47s2k\") pod \"ceilometer-0\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.395455 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.548972 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.815872 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.957207 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c","Type":"ContainerStarted","Data":"9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe"} Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.957265 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c","Type":"ContainerStarted","Data":"2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac"} Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.957453 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 
09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.960730 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0","Type":"ContainerStarted","Data":"c974aa912a4a7141be3277cc3dc64bf6fe3a68cd2312f4357d6f10ce9bb97bb2"} Feb 17 09:01:22 crc kubenswrapper[4813]: I0217 09:01:22.986432 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.986416685 podStartE2EDuration="2.986416685s" podCreationTimestamp="2026-02-17 09:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:01:22.984264654 +0000 UTC m=+1230.645025887" watchObservedRunningTime="2026-02-17 09:01:22.986416685 +0000 UTC m=+1230.647177898" Feb 17 09:01:23 crc kubenswrapper[4813]: I0217 09:01:23.037429 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:01:23 crc kubenswrapper[4813]: W0217 09:01:23.037891 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79160a99_2d68_4803_913f_f2a0345fe683.slice/crio-f48e4f3a824887699215d5b3c5500931139eebeda2e784120bc4df54fb782281 WatchSource:0}: Error finding container f48e4f3a824887699215d5b3c5500931139eebeda2e784120bc4df54fb782281: Status 404 returned error can't find the container with id f48e4f3a824887699215d5b3c5500931139eebeda2e784120bc4df54fb782281 Feb 17 09:01:23 crc kubenswrapper[4813]: I0217 09:01:23.041703 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 09:01:23 crc kubenswrapper[4813]: I0217 09:01:23.122210 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dacf5a3-0ba2-4329-8b57-d473f77dbf16" 
path="/var/lib/kubelet/pods/1dacf5a3-0ba2-4329-8b57-d473f77dbf16/volumes" Feb 17 09:01:23 crc kubenswrapper[4813]: I0217 09:01:23.123033 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810baf22-3810-44d8-8a95-b19474b69078" path="/var/lib/kubelet/pods/810baf22-3810-44d8-8a95-b19474b69078/volumes" Feb 17 09:01:23 crc kubenswrapper[4813]: I0217 09:01:23.971493 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79160a99-2d68-4803-913f-f2a0345fe683","Type":"ContainerStarted","Data":"ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2"} Feb 17 09:01:23 crc kubenswrapper[4813]: I0217 09:01:23.971881 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79160a99-2d68-4803-913f-f2a0345fe683","Type":"ContainerStarted","Data":"f48e4f3a824887699215d5b3c5500931139eebeda2e784120bc4df54fb782281"} Feb 17 09:01:23 crc kubenswrapper[4813]: I0217 09:01:23.973279 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0","Type":"ContainerStarted","Data":"939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8"} Feb 17 09:01:23 crc kubenswrapper[4813]: I0217 09:01:23.995448 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.99543211 podStartE2EDuration="2.99543211s" podCreationTimestamp="2026-02-17 09:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:01:23.989119201 +0000 UTC m=+1231.649880424" watchObservedRunningTime="2026-02-17 09:01:23.99543211 +0000 UTC m=+1231.656193333" Feb 17 09:01:24 crc kubenswrapper[4813]: I0217 09:01:24.929067 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:24 crc kubenswrapper[4813]: I0217 09:01:24.993425 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79160a99-2d68-4803-913f-f2a0345fe683","Type":"ContainerStarted","Data":"9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4"} Feb 17 09:01:24 crc kubenswrapper[4813]: I0217 09:01:24.995450 4813 generic.go:334] "Generic (PLEG): container finished" podID="085554f2-90cd-4173-897d-25e5bc0dd6e2" containerID="6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea" exitCode=0 Feb 17 09:01:24 crc kubenswrapper[4813]: I0217 09:01:24.995500 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"085554f2-90cd-4173-897d-25e5bc0dd6e2","Type":"ContainerDied","Data":"6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea"} Feb 17 09:01:24 crc kubenswrapper[4813]: I0217 09:01:24.995538 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"085554f2-90cd-4173-897d-25e5bc0dd6e2","Type":"ContainerDied","Data":"d7826d5420000bdfb13e22d4692c2c4184b5d92bda919720cf53cd05520b7973"} Feb 17 09:01:24 crc kubenswrapper[4813]: I0217 09:01:24.995540 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:24 crc kubenswrapper[4813]: I0217 09:01:24.995561 4813 scope.go:117] "RemoveContainer" containerID="6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.027426 4813 scope.go:117] "RemoveContainer" containerID="6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea" Feb 17 09:01:25 crc kubenswrapper[4813]: E0217 09:01:25.031402 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea\": container with ID starting with 6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea not found: ID does not exist" containerID="6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.031437 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea"} err="failed to get container status \"6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea\": rpc error: code = NotFound desc = could not find container \"6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea\": container with ID starting with 6e3361509822b512e580c5debb412df7dfada323f5bc41e0badf0c3f3ffe8bea not found: ID does not exist" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.064737 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb97r\" (UniqueName: \"kubernetes.io/projected/085554f2-90cd-4173-897d-25e5bc0dd6e2-kube-api-access-cb97r\") pod \"085554f2-90cd-4173-897d-25e5bc0dd6e2\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.064867 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085554f2-90cd-4173-897d-25e5bc0dd6e2-combined-ca-bundle\") pod \"085554f2-90cd-4173-897d-25e5bc0dd6e2\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.064924 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/085554f2-90cd-4173-897d-25e5bc0dd6e2-logs\") pod \"085554f2-90cd-4173-897d-25e5bc0dd6e2\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.065021 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085554f2-90cd-4173-897d-25e5bc0dd6e2-config-data\") pod \"085554f2-90cd-4173-897d-25e5bc0dd6e2\" (UID: \"085554f2-90cd-4173-897d-25e5bc0dd6e2\") " Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.065382 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/085554f2-90cd-4173-897d-25e5bc0dd6e2-logs" (OuterVolumeSpecName: "logs") pod "085554f2-90cd-4173-897d-25e5bc0dd6e2" (UID: "085554f2-90cd-4173-897d-25e5bc0dd6e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.065778 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/085554f2-90cd-4173-897d-25e5bc0dd6e2-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.069450 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085554f2-90cd-4173-897d-25e5bc0dd6e2-kube-api-access-cb97r" (OuterVolumeSpecName: "kube-api-access-cb97r") pod "085554f2-90cd-4173-897d-25e5bc0dd6e2" (UID: "085554f2-90cd-4173-897d-25e5bc0dd6e2"). InnerVolumeSpecName "kube-api-access-cb97r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.090665 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085554f2-90cd-4173-897d-25e5bc0dd6e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "085554f2-90cd-4173-897d-25e5bc0dd6e2" (UID: "085554f2-90cd-4173-897d-25e5bc0dd6e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.104324 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085554f2-90cd-4173-897d-25e5bc0dd6e2-config-data" (OuterVolumeSpecName: "config-data") pod "085554f2-90cd-4173-897d-25e5bc0dd6e2" (UID: "085554f2-90cd-4173-897d-25e5bc0dd6e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.167862 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/085554f2-90cd-4173-897d-25e5bc0dd6e2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.167886 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb97r\" (UniqueName: \"kubernetes.io/projected/085554f2-90cd-4173-897d-25e5bc0dd6e2-kube-api-access-cb97r\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.167896 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085554f2-90cd-4173-897d-25e5bc0dd6e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.317535 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.326517 4813 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.331739 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:25 crc kubenswrapper[4813]: E0217 09:01:25.332050 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085554f2-90cd-4173-897d-25e5bc0dd6e2" containerName="watcher-applier" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.332066 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="085554f2-90cd-4173-897d-25e5bc0dd6e2" containerName="watcher-applier" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.332214 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="085554f2-90cd-4173-897d-25e5bc0dd6e2" containerName="watcher-applier" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.332758 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.335448 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.344204 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.458936 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.472361 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf6tz\" (UniqueName: \"kubernetes.io/projected/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-kube-api-access-rf6tz\") pod \"watcher-kuttl-applier-0\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.472411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.472431 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.472496 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.573383 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.573714 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf6tz\" (UniqueName: \"kubernetes.io/projected/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-kube-api-access-rf6tz\") pod \"watcher-kuttl-applier-0\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.573745 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.573776 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.574109 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.577742 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.578200 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 
09:01:25.590869 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf6tz\" (UniqueName: \"kubernetes.io/projected/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-kube-api-access-rf6tz\") pod \"watcher-kuttl-applier-0\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:25 crc kubenswrapper[4813]: I0217 09:01:25.658681 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:26 crc kubenswrapper[4813]: I0217 09:01:26.031267 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79160a99-2d68-4803-913f-f2a0345fe683","Type":"ContainerStarted","Data":"d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32"} Feb 17 09:01:26 crc kubenswrapper[4813]: I0217 09:01:26.155593 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:26 crc kubenswrapper[4813]: I0217 09:01:26.357463 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:27 crc kubenswrapper[4813]: I0217 09:01:27.042767 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb","Type":"ContainerStarted","Data":"0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa"} Feb 17 09:01:27 crc kubenswrapper[4813]: I0217 09:01:27.043129 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb","Type":"ContainerStarted","Data":"c54e2892354307388235282cfadc5dcbc6d985a21eacb2bf6365cf8f74265f7a"} Feb 17 09:01:27 crc kubenswrapper[4813]: I0217 09:01:27.048741 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79160a99-2d68-4803-913f-f2a0345fe683","Type":"ContainerStarted","Data":"3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d"} Feb 17 09:01:27 crc kubenswrapper[4813]: I0217 09:01:27.049747 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:01:27 crc kubenswrapper[4813]: I0217 09:01:27.064575 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.064559593 podStartE2EDuration="2.064559593s" podCreationTimestamp="2026-02-17 09:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:01:27.060855548 +0000 UTC m=+1234.721616781" watchObservedRunningTime="2026-02-17 09:01:27.064559593 +0000 UTC m=+1234.725320826" Feb 17 09:01:27 crc kubenswrapper[4813]: I0217 09:01:27.104757 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.160787156 podStartE2EDuration="5.104730926s" podCreationTimestamp="2026-02-17 09:01:22 +0000 UTC" firstStartedPulling="2026-02-17 09:01:23.041422451 +0000 UTC m=+1230.702183684" lastFinishedPulling="2026-02-17 09:01:25.985366231 +0000 UTC m=+1233.646127454" observedRunningTime="2026-02-17 09:01:27.097175611 +0000 UTC m=+1234.757936834" watchObservedRunningTime="2026-02-17 09:01:27.104730926 +0000 UTC m=+1234.765492159" Feb 17 09:01:27 crc kubenswrapper[4813]: I0217 09:01:27.123115 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085554f2-90cd-4173-897d-25e5bc0dd6e2" path="/var/lib/kubelet/pods/085554f2-90cd-4173-897d-25e5bc0dd6e2/volumes" Feb 17 09:01:30 crc kubenswrapper[4813]: I0217 09:01:30.659856 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
Feb 17 09:01:31 crc kubenswrapper[4813]: I0217 09:01:31.356966 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:31 crc kubenswrapper[4813]: I0217 09:01:31.373832 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:32 crc kubenswrapper[4813]: I0217 09:01:32.125946 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:32 crc kubenswrapper[4813]: I0217 09:01:32.395799 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:32 crc kubenswrapper[4813]: I0217 09:01:32.425298 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:33 crc kubenswrapper[4813]: I0217 09:01:33.133529 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:33 crc kubenswrapper[4813]: I0217 09:01:33.174383 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:35 crc kubenswrapper[4813]: I0217 09:01:35.659653 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:35 crc kubenswrapper[4813]: I0217 09:01:35.696138 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:36 crc kubenswrapper[4813]: I0217 09:01:36.207810 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:42 crc kubenswrapper[4813]: I0217 09:01:42.646933 4813 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:01:42 crc kubenswrapper[4813]: I0217 09:01:42.647871 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0" containerName="watcher-decision-engine" containerID="cri-o://939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8" gracePeriod=30 Feb 17 09:01:42 crc kubenswrapper[4813]: I0217 09:01:42.715086 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:42 crc kubenswrapper[4813]: I0217 09:01:42.715571 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb" containerName="watcher-applier" containerID="cri-o://0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa" gracePeriod=30 Feb 17 09:01:42 crc kubenswrapper[4813]: I0217 09:01:42.747055 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:42 crc kubenswrapper[4813]: I0217 09:01:42.747759 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" containerName="watcher-kuttl-api-log" containerID="cri-o://2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac" gracePeriod=30 Feb 17 09:01:42 crc kubenswrapper[4813]: I0217 09:01:42.748329 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" containerName="watcher-api" containerID="cri-o://9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe" gracePeriod=30 Feb 17 09:01:42 crc kubenswrapper[4813]: E0217 09:01:42.790264 4813 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b5b293b_62d2_4c6a_95a0_caaa4aaa2f6c.slice/crio-2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac.scope\": RecentStats: unable to find data in memory cache]" Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.228952 4813 generic.go:334] "Generic (PLEG): container finished" podID="8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" containerID="2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac" exitCode=143 Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.228997 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c","Type":"ContainerDied","Data":"2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac"} Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.644070 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.688094 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-logs\") pod \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.688155 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-combined-ca-bundle\") pod \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.688205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-config-data\") pod \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.688270 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-custom-prometheus-ca\") pod \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.688418 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7qnk\" (UniqueName: \"kubernetes.io/projected/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-kube-api-access-x7qnk\") pod \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\" (UID: \"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c\") " Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.688870 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-logs" (OuterVolumeSpecName: "logs") pod "8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" (UID: "8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.699877 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-kube-api-access-x7qnk" (OuterVolumeSpecName: "kube-api-access-x7qnk") pod "8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" (UID: "8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c"). InnerVolumeSpecName "kube-api-access-x7qnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.719143 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" (UID: "8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.729777 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" (UID: "8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.765347 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-config-data" (OuterVolumeSpecName: "config-data") pod "8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" (UID: "8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.790250 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.790290 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.790303 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.790324 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:43 crc kubenswrapper[4813]: I0217 09:01:43.790334 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7qnk\" (UniqueName: \"kubernetes.io/projected/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c-kube-api-access-x7qnk\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.237982 4813 generic.go:334] "Generic (PLEG): container finished" podID="8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" containerID="9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe" exitCode=0 Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.238027 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c","Type":"ContainerDied","Data":"9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe"} 
Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.238074 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c","Type":"ContainerDied","Data":"599da4f70650246cc6f2d90c3932edaab469d1b014d9d1e5079ea6fee453db63"} Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.238103 4813 scope.go:117] "RemoveContainer" containerID="9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.238477 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.259583 4813 scope.go:117] "RemoveContainer" containerID="2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.277573 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.281065 4813 scope.go:117] "RemoveContainer" containerID="9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe" Feb 17 09:01:44 crc kubenswrapper[4813]: E0217 09:01:44.281561 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe\": container with ID starting with 9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe not found: ID does not exist" containerID="9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.281598 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe"} err="failed to get container status 
\"9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe\": rpc error: code = NotFound desc = could not find container \"9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe\": container with ID starting with 9e8af29c96bb89a9aeba1f2edd2fc650da05ba689ea64c7ea248ce1baf12cbbe not found: ID does not exist" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.281623 4813 scope.go:117] "RemoveContainer" containerID="2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac" Feb 17 09:01:44 crc kubenswrapper[4813]: E0217 09:01:44.282024 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac\": container with ID starting with 2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac not found: ID does not exist" containerID="2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.282063 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac"} err="failed to get container status \"2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac\": rpc error: code = NotFound desc = could not find container \"2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac\": container with ID starting with 2a32b449654461e83fa0aab72b6145f8506ec652c3ded6e57ba47e3324ec97ac not found: ID does not exist" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.286461 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.303379 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:44 crc kubenswrapper[4813]: E0217 09:01:44.303690 4813 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" containerName="watcher-api" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.303705 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" containerName="watcher-api" Feb 17 09:01:44 crc kubenswrapper[4813]: E0217 09:01:44.303720 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" containerName="watcher-kuttl-api-log" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.303726 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" containerName="watcher-kuttl-api-log" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.303890 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" containerName="watcher-kuttl-api-log" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.303903 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" containerName="watcher-api" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.304700 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.307278 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.328825 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.398962 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/270f066c-48f3-4f57-a70b-bf22df03c035-logs\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.399015 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.399079 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.399097 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.399345 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s8xr\" (UniqueName: \"kubernetes.io/projected/270f066c-48f3-4f57-a70b-bf22df03c035-kube-api-access-8s8xr\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.501140 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s8xr\" (UniqueName: \"kubernetes.io/projected/270f066c-48f3-4f57-a70b-bf22df03c035-kube-api-access-8s8xr\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.501219 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/270f066c-48f3-4f57-a70b-bf22df03c035-logs\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.501259 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.501349 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 
09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.501371 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.501628 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/270f066c-48f3-4f57-a70b-bf22df03c035-logs\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.506953 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.507291 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.512125 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.519576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-8s8xr\" (UniqueName: \"kubernetes.io/projected/270f066c-48f3-4f57-a70b-bf22df03c035-kube-api-access-8s8xr\") pod \"watcher-kuttl-api-0\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:44 crc kubenswrapper[4813]: I0217 09:01:44.635675 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:45 crc kubenswrapper[4813]: I0217 09:01:45.128982 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c" path="/var/lib/kubelet/pods/8b5b293b-62d2-4c6a-95a0-caaa4aaa2f6c/volumes" Feb 17 09:01:45 crc kubenswrapper[4813]: I0217 09:01:45.178445 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:01:45 crc kubenswrapper[4813]: W0217 09:01:45.182031 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod270f066c_48f3_4f57_a70b_bf22df03c035.slice/crio-4cb5a67523039dbe7c5590b58ad600d4d32d7883bb5c105cb5c1f9c3991b69b5 WatchSource:0}: Error finding container 4cb5a67523039dbe7c5590b58ad600d4d32d7883bb5c105cb5c1f9c3991b69b5: Status 404 returned error can't find the container with id 4cb5a67523039dbe7c5590b58ad600d4d32d7883bb5c105cb5c1f9c3991b69b5 Feb 17 09:01:45 crc kubenswrapper[4813]: I0217 09:01:45.247024 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"270f066c-48f3-4f57-a70b-bf22df03c035","Type":"ContainerStarted","Data":"4cb5a67523039dbe7c5590b58ad600d4d32d7883bb5c105cb5c1f9c3991b69b5"} Feb 17 09:01:45 crc kubenswrapper[4813]: E0217 09:01:45.660911 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:01:45 crc kubenswrapper[4813]: E0217 09:01:45.662642 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:01:45 crc kubenswrapper[4813]: E0217 09:01:45.663976 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:01:45 crc kubenswrapper[4813]: E0217 09:01:45.664013 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb" containerName="watcher-applier" Feb 17 09:01:46 crc kubenswrapper[4813]: I0217 09:01:46.262492 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"270f066c-48f3-4f57-a70b-bf22df03c035","Type":"ContainerStarted","Data":"332feb24d7d577336954110e08a5690fef1bc14f1d9223bc596bdadcc86a9e59"} Feb 17 09:01:46 crc kubenswrapper[4813]: I0217 09:01:46.263397 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"270f066c-48f3-4f57-a70b-bf22df03c035","Type":"ContainerStarted","Data":"3f5c61850e22ae42c2cf0741a226608c2b6424231d91dbb1d24f0607dc75c081"} Feb 17 09:01:46 crc kubenswrapper[4813]: I0217 
09:01:46.263443 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:46 crc kubenswrapper[4813]: I0217 09:01:46.303684 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.303655168 podStartE2EDuration="2.303655168s" podCreationTimestamp="2026-02-17 09:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:01:46.289353611 +0000 UTC m=+1253.950114874" watchObservedRunningTime="2026-02-17 09:01:46.303655168 +0000 UTC m=+1253.964416421" Feb 17 09:01:47 crc kubenswrapper[4813]: I0217 09:01:47.728786 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:47 crc kubenswrapper[4813]: I0217 09:01:47.765903 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf6tz\" (UniqueName: \"kubernetes.io/projected/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-kube-api-access-rf6tz\") pod \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " Feb 17 09:01:47 crc kubenswrapper[4813]: I0217 09:01:47.765962 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-logs\") pod \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " Feb 17 09:01:47 crc kubenswrapper[4813]: I0217 09:01:47.766001 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-combined-ca-bundle\") pod \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " Feb 17 09:01:47 crc 
kubenswrapper[4813]: I0217 09:01:47.766083 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-config-data\") pod \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\" (UID: \"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb\") " Feb 17 09:01:47 crc kubenswrapper[4813]: I0217 09:01:47.787994 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-logs" (OuterVolumeSpecName: "logs") pod "e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb" (UID: "e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:01:47 crc kubenswrapper[4813]: I0217 09:01:47.788420 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-kube-api-access-rf6tz" (OuterVolumeSpecName: "kube-api-access-rf6tz") pod "e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb" (UID: "e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb"). InnerVolumeSpecName "kube-api-access-rf6tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:01:47 crc kubenswrapper[4813]: I0217 09:01:47.816979 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb" (UID: "e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:47 crc kubenswrapper[4813]: I0217 09:01:47.824377 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-config-data" (OuterVolumeSpecName: "config-data") pod "e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb" (UID: "e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:47 crc kubenswrapper[4813]: I0217 09:01:47.867813 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:47 crc kubenswrapper[4813]: I0217 09:01:47.867855 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf6tz\" (UniqueName: \"kubernetes.io/projected/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-kube-api-access-rf6tz\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:47 crc kubenswrapper[4813]: I0217 09:01:47.867868 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:47 crc kubenswrapper[4813]: I0217 09:01:47.867880 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.280475 4813 generic.go:334] "Generic (PLEG): container finished" podID="e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb" containerID="0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa" exitCode=0 Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.280511 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb","Type":"ContainerDied","Data":"0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa"} Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.280536 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb","Type":"ContainerDied","Data":"c54e2892354307388235282cfadc5dcbc6d985a21eacb2bf6365cf8f74265f7a"} Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.280553 4813 scope.go:117] "RemoveContainer" containerID="0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.280563 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.322045 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.331056 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.338459 4813 scope.go:117] "RemoveContainer" containerID="0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa" Feb 17 09:01:48 crc kubenswrapper[4813]: E0217 09:01:48.338938 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa\": container with ID starting with 0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa not found: ID does not exist" containerID="0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.338996 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa"} err="failed to get container status \"0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa\": rpc error: code = NotFound desc = could not find container \"0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa\": container with ID 
starting with 0270afc93fecdaff39f27c066bf0a6c4d261b18a9b8a211ddc42a77b674f42aa not found: ID does not exist" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.341794 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:48 crc kubenswrapper[4813]: E0217 09:01:48.342196 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb" containerName="watcher-applier" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.342271 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb" containerName="watcher-applier" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.342685 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb" containerName="watcher-applier" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.345339 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.348726 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.356441 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.379168 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a42faa8-3979-4ca1-8fba-b213170a2e7b-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.379207 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a42faa8-3979-4ca1-8fba-b213170a2e7b-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.379239 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a42faa8-3979-4ca1-8fba-b213170a2e7b-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.379333 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbz4\" (UniqueName: \"kubernetes.io/projected/7a42faa8-3979-4ca1-8fba-b213170a2e7b-kube-api-access-9bbz4\") pod \"watcher-kuttl-applier-0\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.432704 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.480396 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a42faa8-3979-4ca1-8fba-b213170a2e7b-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.480442 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a42faa8-3979-4ca1-8fba-b213170a2e7b-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.480472 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a42faa8-3979-4ca1-8fba-b213170a2e7b-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.480570 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bbz4\" (UniqueName: \"kubernetes.io/projected/7a42faa8-3979-4ca1-8fba-b213170a2e7b-kube-api-access-9bbz4\") pod \"watcher-kuttl-applier-0\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.480861 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a42faa8-3979-4ca1-8fba-b213170a2e7b-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.488001 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a42faa8-3979-4ca1-8fba-b213170a2e7b-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.491587 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a42faa8-3979-4ca1-8fba-b213170a2e7b-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 
09:01:48.512083 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbz4\" (UniqueName: \"kubernetes.io/projected/7a42faa8-3979-4ca1-8fba-b213170a2e7b-kube-api-access-9bbz4\") pod \"watcher-kuttl-applier-0\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:48 crc kubenswrapper[4813]: I0217 09:01:48.684914 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:01:49 crc kubenswrapper[4813]: I0217 09:01:49.123017 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb" path="/var/lib/kubelet/pods/e58c6fa2-d9ac-4e40-9fb9-8436a0dbfcfb/volumes" Feb 17 09:01:49 crc kubenswrapper[4813]: I0217 09:01:49.172702 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:01:49 crc kubenswrapper[4813]: W0217 09:01:49.173548 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a42faa8_3979_4ca1_8fba_b213170a2e7b.slice/crio-1e70ed7e71ce71542fb773033df9fe482f4b7d59f1ca252fd5c5d97cf5821cad WatchSource:0}: Error finding container 1e70ed7e71ce71542fb773033df9fe482f4b7d59f1ca252fd5c5d97cf5821cad: Status 404 returned error can't find the container with id 1e70ed7e71ce71542fb773033df9fe482f4b7d59f1ca252fd5c5d97cf5821cad Feb 17 09:01:49 crc kubenswrapper[4813]: I0217 09:01:49.297446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7a42faa8-3979-4ca1-8fba-b213170a2e7b","Type":"ContainerStarted","Data":"1e70ed7e71ce71542fb773033df9fe482f4b7d59f1ca252fd5c5d97cf5821cad"} Feb 17 09:01:49 crc kubenswrapper[4813]: I0217 09:01:49.636702 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.219841 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.315104 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-custom-prometheus-ca\") pod \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.315600 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-logs\") pod \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.315678 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-config-data\") pod \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.315705 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-combined-ca-bundle\") pod \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\" (UID: \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.315751 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5f5s\" (UniqueName: \"kubernetes.io/projected/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-kube-api-access-w5f5s\") pod \"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\" (UID: 
\"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0\") " Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.318048 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-logs" (OuterVolumeSpecName: "logs") pod "e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0" (UID: "e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.319572 4813 generic.go:334] "Generic (PLEG): container finished" podID="e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0" containerID="939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8" exitCode=0 Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.319632 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0","Type":"ContainerDied","Data":"939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8"} Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.319658 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0","Type":"ContainerDied","Data":"c974aa912a4a7141be3277cc3dc64bf6fe3a68cd2312f4357d6f10ce9bb97bb2"} Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.319673 4813 scope.go:117] "RemoveContainer" containerID="939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.319719 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-kube-api-access-w5f5s" (OuterVolumeSpecName: "kube-api-access-w5f5s") pod "e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0" (UID: "e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0"). InnerVolumeSpecName "kube-api-access-w5f5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.319786 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.322732 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7a42faa8-3979-4ca1-8fba-b213170a2e7b","Type":"ContainerStarted","Data":"bf95c184ba713ef160c7c1f84e0369e094e10edf34680594c61c9f609d27b3e4"} Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.349646 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0" (UID: "e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.351503 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.351483292 podStartE2EDuration="2.351483292s" podCreationTimestamp="2026-02-17 09:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:01:50.347803778 +0000 UTC m=+1258.008565001" watchObservedRunningTime="2026-02-17 09:01:50.351483292 +0000 UTC m=+1258.012244515" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.351849 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0" (UID: "e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0"). 
InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.363721 4813 scope.go:117] "RemoveContainer" containerID="939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.363887 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-config-data" (OuterVolumeSpecName: "config-data") pod "e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0" (UID: "e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:01:50 crc kubenswrapper[4813]: E0217 09:01:50.364658 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8\": container with ID starting with 939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8 not found: ID does not exist" containerID="939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.364719 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8"} err="failed to get container status \"939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8\": rpc error: code = NotFound desc = could not find container \"939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8\": container with ID starting with 939ac339924702ef2eb7c91ba723cf45a471e28dc9dba1eea6637f05102dbae8 not found: ID does not exist" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.418037 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5f5s\" (UniqueName: 
\"kubernetes.io/projected/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-kube-api-access-w5f5s\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.418083 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.418097 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.418108 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.418120 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.652828 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.661936 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.678114 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:01:50 crc kubenswrapper[4813]: E0217 09:01:50.678444 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0" containerName="watcher-decision-engine" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.678460 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0" containerName="watcher-decision-engine" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.678615 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0" containerName="watcher-decision-engine" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.679141 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.680871 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.701710 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.723463 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe254b04-57c7-42af-a1e3-c3a36a610fc2-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.723938 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.724034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.724135 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8tjv\" (UniqueName: \"kubernetes.io/projected/fe254b04-57c7-42af-a1e3-c3a36a610fc2-kube-api-access-n8tjv\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.724265 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.825736 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe254b04-57c7-42af-a1e3-c3a36a610fc2-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.825827 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.825847 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.825869 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8tjv\" (UniqueName: \"kubernetes.io/projected/fe254b04-57c7-42af-a1e3-c3a36a610fc2-kube-api-access-n8tjv\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.825897 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.827988 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe254b04-57c7-42af-a1e3-c3a36a610fc2-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.830219 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.832816 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.841280 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.860634 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8tjv\" (UniqueName: \"kubernetes.io/projected/fe254b04-57c7-42af-a1e3-c3a36a610fc2-kube-api-access-n8tjv\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:01:50 crc kubenswrapper[4813]: I0217 09:01:50.998027 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:01:51 crc kubenswrapper[4813]: I0217 09:01:51.127096 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0" path="/var/lib/kubelet/pods/e4ea675a-923f-4b7c-8c9c-dc3e44cc16e0/volumes"
Feb 17 09:01:51 crc kubenswrapper[4813]: I0217 09:01:51.543928 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:01:51 crc kubenswrapper[4813]: W0217 09:01:51.551411 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe254b04_57c7_42af_a1e3_c3a36a610fc2.slice/crio-24d658edaa32d0aeea9b0b816be040ee92ce62e44a62dc4ee9501e74e89541d4 WatchSource:0}: Error finding container 24d658edaa32d0aeea9b0b816be040ee92ce62e44a62dc4ee9501e74e89541d4: Status 404 returned error can't find the container with id 24d658edaa32d0aeea9b0b816be040ee92ce62e44a62dc4ee9501e74e89541d4
Feb 17 09:01:52 crc kubenswrapper[4813]: I0217 09:01:52.347667 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"fe254b04-57c7-42af-a1e3-c3a36a610fc2","Type":"ContainerStarted","Data":"b9d42836cd13de4c7363fd4eb091d772860977bb0c0522f8f8da893e78f6f519"}
Feb 17 09:01:52 crc kubenswrapper[4813]: I0217 09:01:52.348058 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"fe254b04-57c7-42af-a1e3-c3a36a610fc2","Type":"ContainerStarted","Data":"24d658edaa32d0aeea9b0b816be040ee92ce62e44a62dc4ee9501e74e89541d4"}
Feb 17 09:01:52 crc kubenswrapper[4813]: I0217 09:01:52.379750 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.379721404 podStartE2EDuration="2.379721404s" podCreationTimestamp="2026-02-17 09:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:01:52.372743245 +0000 UTC m=+1260.033504508" watchObservedRunningTime="2026-02-17 09:01:52.379721404 +0000 UTC m=+1260.040482657"
Feb 17 09:01:52 crc kubenswrapper[4813]: I0217 09:01:52.570503 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:01:53 crc kubenswrapper[4813]: I0217 09:01:53.685737 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:01:54 crc kubenswrapper[4813]: I0217 09:01:54.636586 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:54 crc kubenswrapper[4813]: I0217 09:01:54.640019 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:55 crc kubenswrapper[4813]: I0217 09:01:55.379929 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:01:58 crc kubenswrapper[4813]: I0217 09:01:58.686223 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:01:58 crc kubenswrapper[4813]: I0217 09:01:58.729249 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:01:59 crc kubenswrapper[4813]: I0217 09:01:59.436419 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:02:00 crc kubenswrapper[4813]: I0217 09:02:00.998801 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:02:01 crc kubenswrapper[4813]: I0217 09:02:01.028923 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:02:01 crc kubenswrapper[4813]: I0217 09:02:01.438943 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:02:01 crc kubenswrapper[4813]: I0217 09:02:01.597343 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.306794 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7"]
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.319532 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-z5bm7"]
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.390626 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.438184 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher1ec2-account-delete-s22r4"]
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.439833 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4"
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.454568 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.454826 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="270f066c-48f3-4f57-a70b-bf22df03c035" containerName="watcher-kuttl-api-log" containerID="cri-o://3f5c61850e22ae42c2cf0741a226608c2b6424231d91dbb1d24f0607dc75c081" gracePeriod=30
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.454921 4813 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-qksfh\" not found"
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.454966 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="270f066c-48f3-4f57-a70b-bf22df03c035" containerName="watcher-api" containerID="cri-o://332feb24d7d577336954110e08a5690fef1bc14f1d9223bc596bdadcc86a9e59" gracePeriod=30
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.471899 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher1ec2-account-delete-s22r4"]
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.541415 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.541614 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="7a42faa8-3979-4ca1-8fba-b213170a2e7b" containerName="watcher-applier" containerID="cri-o://bf95c184ba713ef160c7c1f84e0369e094e10edf34680594c61c9f609d27b3e4" gracePeriod=30
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.637011 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg44n\" (UniqueName: \"kubernetes.io/projected/dccb3c67-b2f2-49b0-9713-d45599d2ca09-kube-api-access-dg44n\") pod \"watcher1ec2-account-delete-s22r4\" (UID: \"dccb3c67-b2f2-49b0-9713-d45599d2ca09\") " pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4"
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.637088 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dccb3c67-b2f2-49b0-9713-d45599d2ca09-operator-scripts\") pod \"watcher1ec2-account-delete-s22r4\" (UID: \"dccb3c67-b2f2-49b0-9713-d45599d2ca09\") " pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4"
Feb 17 09:02:03 crc kubenswrapper[4813]: E0217 09:02:03.637818 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:02:03 crc kubenswrapper[4813]: E0217 09:02:03.637954 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data podName:fe254b04-57c7-42af-a1e3-c3a36a610fc2 nodeName:}" failed. No retries permitted until 2026-02-17 09:02:04.137936056 +0000 UTC m=+1271.798697279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "fe254b04-57c7-42af-a1e3-c3a36a610fc2") : secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:02:03 crc kubenswrapper[4813]: E0217 09:02:03.689411 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf95c184ba713ef160c7c1f84e0369e094e10edf34680594c61c9f609d27b3e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 17 09:02:03 crc kubenswrapper[4813]: E0217 09:02:03.693824 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf95c184ba713ef160c7c1f84e0369e094e10edf34680594c61c9f609d27b3e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 17 09:02:03 crc kubenswrapper[4813]: E0217 09:02:03.699490 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bf95c184ba713ef160c7c1f84e0369e094e10edf34680594c61c9f609d27b3e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 17 09:02:03 crc kubenswrapper[4813]: E0217 09:02:03.699561 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="7a42faa8-3979-4ca1-8fba-b213170a2e7b" containerName="watcher-applier"
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.738031 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg44n\" (UniqueName: \"kubernetes.io/projected/dccb3c67-b2f2-49b0-9713-d45599d2ca09-kube-api-access-dg44n\") pod \"watcher1ec2-account-delete-s22r4\" (UID: \"dccb3c67-b2f2-49b0-9713-d45599d2ca09\") " pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4"
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.738092 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dccb3c67-b2f2-49b0-9713-d45599d2ca09-operator-scripts\") pod \"watcher1ec2-account-delete-s22r4\" (UID: \"dccb3c67-b2f2-49b0-9713-d45599d2ca09\") " pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4"
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.738786 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dccb3c67-b2f2-49b0-9713-d45599d2ca09-operator-scripts\") pod \"watcher1ec2-account-delete-s22r4\" (UID: \"dccb3c67-b2f2-49b0-9713-d45599d2ca09\") " pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4"
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.759817 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg44n\" (UniqueName: \"kubernetes.io/projected/dccb3c67-b2f2-49b0-9713-d45599d2ca09-kube-api-access-dg44n\") pod \"watcher1ec2-account-delete-s22r4\" (UID: \"dccb3c67-b2f2-49b0-9713-d45599d2ca09\") " pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4"
Feb 17 09:02:03 crc kubenswrapper[4813]: I0217 09:02:03.785231 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4"
Feb 17 09:02:04 crc kubenswrapper[4813]: E0217 09:02:04.145289 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:02:04 crc kubenswrapper[4813]: E0217 09:02:04.145377 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data podName:fe254b04-57c7-42af-a1e3-c3a36a610fc2 nodeName:}" failed. No retries permitted until 2026-02-17 09:02:05.145357536 +0000 UTC m=+1272.806118759 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "fe254b04-57c7-42af-a1e3-c3a36a610fc2") : secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:02:04 crc kubenswrapper[4813]: I0217 09:02:04.283705 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher1ec2-account-delete-s22r4"]
Feb 17 09:02:04 crc kubenswrapper[4813]: I0217 09:02:04.463988 4813 generic.go:334] "Generic (PLEG): container finished" podID="270f066c-48f3-4f57-a70b-bf22df03c035" containerID="332feb24d7d577336954110e08a5690fef1bc14f1d9223bc596bdadcc86a9e59" exitCode=0
Feb 17 09:02:04 crc kubenswrapper[4813]: I0217 09:02:04.464017 4813 generic.go:334] "Generic (PLEG): container finished" podID="270f066c-48f3-4f57-a70b-bf22df03c035" containerID="3f5c61850e22ae42c2cf0741a226608c2b6424231d91dbb1d24f0607dc75c081" exitCode=143
Feb 17 09:02:04 crc kubenswrapper[4813]: I0217 09:02:04.464087 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"270f066c-48f3-4f57-a70b-bf22df03c035","Type":"ContainerDied","Data":"332feb24d7d577336954110e08a5690fef1bc14f1d9223bc596bdadcc86a9e59"}
Feb 17 09:02:04 crc kubenswrapper[4813]: I0217 09:02:04.464149 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"270f066c-48f3-4f57-a70b-bf22df03c035","Type":"ContainerDied","Data":"3f5c61850e22ae42c2cf0741a226608c2b6424231d91dbb1d24f0607dc75c081"}
Feb 17 09:02:04 crc kubenswrapper[4813]: I0217 09:02:04.465112 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="fe254b04-57c7-42af-a1e3-c3a36a610fc2" containerName="watcher-decision-engine" containerID="cri-o://b9d42836cd13de4c7363fd4eb091d772860977bb0c0522f8f8da893e78f6f519" gracePeriod=30
Feb 17 09:02:04 crc kubenswrapper[4813]: I0217 09:02:04.466166 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4" event={"ID":"dccb3c67-b2f2-49b0-9713-d45599d2ca09","Type":"ContainerStarted","Data":"d1bb798cee9e9a39c1c2f5fd9ee74b9fc9fbb7a1ce9305fa88d48b96be7dde27"}
Feb 17 09:02:04 crc kubenswrapper[4813]: I0217 09:02:04.636733 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="270f066c-48f3-4f57-a70b-bf22df03c035" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.140:9322/\": dial tcp 10.217.0.140:9322: connect: connection refused"
Feb 17 09:02:04 crc kubenswrapper[4813]: I0217 09:02:04.636749 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="270f066c-48f3-4f57-a70b-bf22df03c035" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.140:9322/\": dial tcp 10.217.0.140:9322: connect: connection refused"
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.127384 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc013ac4-e8b4-4f10-991c-bd08de8bc164" path="/var/lib/kubelet/pods/cc013ac4-e8b4-4f10-991c-bd08de8bc164/volumes"
Feb 17 09:02:05 crc kubenswrapper[4813]: E0217 09:02:05.161768 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:02:05 crc kubenswrapper[4813]: E0217 09:02:05.161837 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data podName:fe254b04-57c7-42af-a1e3-c3a36a610fc2 nodeName:}" failed. No retries permitted until 2026-02-17 09:02:07.161815832 +0000 UTC m=+1274.822577055 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "fe254b04-57c7-42af-a1e3-c3a36a610fc2") : secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.166743 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.166787 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.250999 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.365196 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/270f066c-48f3-4f57-a70b-bf22df03c035-logs\") pod \"270f066c-48f3-4f57-a70b-bf22df03c035\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") "
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.365246 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s8xr\" (UniqueName: \"kubernetes.io/projected/270f066c-48f3-4f57-a70b-bf22df03c035-kube-api-access-8s8xr\") pod \"270f066c-48f3-4f57-a70b-bf22df03c035\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") "
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.365284 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-custom-prometheus-ca\") pod \"270f066c-48f3-4f57-a70b-bf22df03c035\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") "
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.365408 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-config-data\") pod \"270f066c-48f3-4f57-a70b-bf22df03c035\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") "
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.365444 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-combined-ca-bundle\") pod \"270f066c-48f3-4f57-a70b-bf22df03c035\" (UID: \"270f066c-48f3-4f57-a70b-bf22df03c035\") "
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.366532 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/270f066c-48f3-4f57-a70b-bf22df03c035-logs" (OuterVolumeSpecName: "logs") pod "270f066c-48f3-4f57-a70b-bf22df03c035" (UID: "270f066c-48f3-4f57-a70b-bf22df03c035"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.385156 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270f066c-48f3-4f57-a70b-bf22df03c035-kube-api-access-8s8xr" (OuterVolumeSpecName: "kube-api-access-8s8xr") pod "270f066c-48f3-4f57-a70b-bf22df03c035" (UID: "270f066c-48f3-4f57-a70b-bf22df03c035"). InnerVolumeSpecName "kube-api-access-8s8xr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.403529 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "270f066c-48f3-4f57-a70b-bf22df03c035" (UID: "270f066c-48f3-4f57-a70b-bf22df03c035"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.417590 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-config-data" (OuterVolumeSpecName: "config-data") pod "270f066c-48f3-4f57-a70b-bf22df03c035" (UID: "270f066c-48f3-4f57-a70b-bf22df03c035"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.421182 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "270f066c-48f3-4f57-a70b-bf22df03c035" (UID: "270f066c-48f3-4f57-a70b-bf22df03c035"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.467879 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.467928 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.467941 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/270f066c-48f3-4f57-a70b-bf22df03c035-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.467952 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s8xr\" (UniqueName: \"kubernetes.io/projected/270f066c-48f3-4f57-a70b-bf22df03c035-kube-api-access-8s8xr\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.467964 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/270f066c-48f3-4f57-a70b-bf22df03c035-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.478620 4813 generic.go:334] "Generic (PLEG): container finished" podID="dccb3c67-b2f2-49b0-9713-d45599d2ca09" containerID="fd106661bd624754f10345ddc8c6c2374aa13b0d887a76b9159c9c361ff5097a" exitCode=0
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.478708 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4" event={"ID":"dccb3c67-b2f2-49b0-9713-d45599d2ca09","Type":"ContainerDied","Data":"fd106661bd624754f10345ddc8c6c2374aa13b0d887a76b9159c9c361ff5097a"}
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.480245 4813 generic.go:334] "Generic (PLEG): container finished" podID="7a42faa8-3979-4ca1-8fba-b213170a2e7b" containerID="bf95c184ba713ef160c7c1f84e0369e094e10edf34680594c61c9f609d27b3e4" exitCode=0
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.480371 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7a42faa8-3979-4ca1-8fba-b213170a2e7b","Type":"ContainerDied","Data":"bf95c184ba713ef160c7c1f84e0369e094e10edf34680594c61c9f609d27b3e4"}
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.482433 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"270f066c-48f3-4f57-a70b-bf22df03c035","Type":"ContainerDied","Data":"4cb5a67523039dbe7c5590b58ad600d4d32d7883bb5c105cb5c1f9c3991b69b5"}
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.482472 4813 scope.go:117] "RemoveContainer" containerID="332feb24d7d577336954110e08a5690fef1bc14f1d9223bc596bdadcc86a9e59"
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.482490 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.521081 4813 scope.go:117] "RemoveContainer" containerID="3f5c61850e22ae42c2cf0741a226608c2b6424231d91dbb1d24f0607dc75c081"
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.522013 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.529012 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.785689 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.936399 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.936668 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="ceilometer-central-agent" containerID="cri-o://ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2" gracePeriod=30
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.936778 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="ceilometer-notification-agent" containerID="cri-o://9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4" gracePeriod=30
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.936762 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="sg-core" containerID="cri-o://d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32" gracePeriod=30
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.937038 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="proxy-httpd" containerID="cri-o://3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d" gracePeriod=30
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.975819 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bbz4\" (UniqueName: \"kubernetes.io/projected/7a42faa8-3979-4ca1-8fba-b213170a2e7b-kube-api-access-9bbz4\") pod \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") "
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.975859 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a42faa8-3979-4ca1-8fba-b213170a2e7b-config-data\") pod \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") "
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.975912 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a42faa8-3979-4ca1-8fba-b213170a2e7b-logs\") pod \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") "
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.975929 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a42faa8-3979-4ca1-8fba-b213170a2e7b-combined-ca-bundle\") pod \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\" (UID: \"7a42faa8-3979-4ca1-8fba-b213170a2e7b\") "
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.977005 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a42faa8-3979-4ca1-8fba-b213170a2e7b-logs" (OuterVolumeSpecName: "logs") pod "7a42faa8-3979-4ca1-8fba-b213170a2e7b" (UID: "7a42faa8-3979-4ca1-8fba-b213170a2e7b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:02:05 crc kubenswrapper[4813]: I0217 09:02:05.982020 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a42faa8-3979-4ca1-8fba-b213170a2e7b-kube-api-access-9bbz4" (OuterVolumeSpecName: "kube-api-access-9bbz4") pod "7a42faa8-3979-4ca1-8fba-b213170a2e7b" (UID: "7a42faa8-3979-4ca1-8fba-b213170a2e7b"). InnerVolumeSpecName "kube-api-access-9bbz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.005714 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a42faa8-3979-4ca1-8fba-b213170a2e7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a42faa8-3979-4ca1-8fba-b213170a2e7b" (UID: "7a42faa8-3979-4ca1-8fba-b213170a2e7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.034892 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a42faa8-3979-4ca1-8fba-b213170a2e7b-config-data" (OuterVolumeSpecName: "config-data") pod "7a42faa8-3979-4ca1-8fba-b213170a2e7b" (UID: "7a42faa8-3979-4ca1-8fba-b213170a2e7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.077475 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a42faa8-3979-4ca1-8fba-b213170a2e7b-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.077505 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a42faa8-3979-4ca1-8fba-b213170a2e7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.077514 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bbz4\" (UniqueName: \"kubernetes.io/projected/7a42faa8-3979-4ca1-8fba-b213170a2e7b-kube-api-access-9bbz4\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.077523 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a42faa8-3979-4ca1-8fba-b213170a2e7b-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.492685 4813 generic.go:334] "Generic (PLEG): container finished" podID="79160a99-2d68-4803-913f-f2a0345fe683" containerID="3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d" exitCode=0
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.492720 4813 generic.go:334] "Generic (PLEG): container finished" podID="79160a99-2d68-4803-913f-f2a0345fe683" containerID="d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32" exitCode=2
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.492734 4813 generic.go:334] "Generic (PLEG): container finished" podID="79160a99-2d68-4803-913f-f2a0345fe683" containerID="ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2" exitCode=0
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.492807 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79160a99-2d68-4803-913f-f2a0345fe683","Type":"ContainerDied","Data":"3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d"}
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.492845 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79160a99-2d68-4803-913f-f2a0345fe683","Type":"ContainerDied","Data":"d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32"}
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.492859 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79160a99-2d68-4803-913f-f2a0345fe683","Type":"ContainerDied","Data":"ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2"}
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.494480 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7a42faa8-3979-4ca1-8fba-b213170a2e7b","Type":"ContainerDied","Data":"1e70ed7e71ce71542fb773033df9fe482f4b7d59f1ca252fd5c5d97cf5821cad"}
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.494536 4813 scope.go:117] "RemoveContainer" containerID="bf95c184ba713ef160c7c1f84e0369e094e10edf34680594c61c9f609d27b3e4"
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.494501 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.532937 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.539774 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.854992 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4"
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.931000 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.990628 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dccb3c67-b2f2-49b0-9713-d45599d2ca09-operator-scripts\") pod \"dccb3c67-b2f2-49b0-9713-d45599d2ca09\" (UID: \"dccb3c67-b2f2-49b0-9713-d45599d2ca09\") "
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.990760 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg44n\" (UniqueName: \"kubernetes.io/projected/dccb3c67-b2f2-49b0-9713-d45599d2ca09-kube-api-access-dg44n\") pod \"dccb3c67-b2f2-49b0-9713-d45599d2ca09\" (UID: \"dccb3c67-b2f2-49b0-9713-d45599d2ca09\") "
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.992026 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dccb3c67-b2f2-49b0-9713-d45599d2ca09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dccb3c67-b2f2-49b0-9713-d45599d2ca09" (UID: "dccb3c67-b2f2-49b0-9713-d45599d2ca09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 09:02:06 crc kubenswrapper[4813]: I0217 09:02:06.994739 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dccb3c67-b2f2-49b0-9713-d45599d2ca09-kube-api-access-dg44n" (OuterVolumeSpecName: "kube-api-access-dg44n") pod "dccb3c67-b2f2-49b0-9713-d45599d2ca09" (UID: "dccb3c67-b2f2-49b0-9713-d45599d2ca09"). InnerVolumeSpecName "kube-api-access-dg44n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.091748 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-scripts\") pod \"79160a99-2d68-4803-913f-f2a0345fe683\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") "
Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.092693 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-sg-core-conf-yaml\") pod \"79160a99-2d68-4803-913f-f2a0345fe683\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") "
Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.092777 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-config-data\") pod \"79160a99-2d68-4803-913f-f2a0345fe683\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") "
Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.092875 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-ceilometer-tls-certs\") pod \"79160a99-2d68-4803-913f-f2a0345fe683\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") "
Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.093008 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79160a99-2d68-4803-913f-f2a0345fe683-log-httpd\") pod \"79160a99-2d68-4803-913f-f2a0345fe683\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") "
Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.093117 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79160a99-2d68-4803-913f-f2a0345fe683-run-httpd\") pod \"79160a99-2d68-4803-913f-f2a0345fe683\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") "
Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.093411 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47s2k\" (UniqueName: \"kubernetes.io/projected/79160a99-2d68-4803-913f-f2a0345fe683-kube-api-access-47s2k\") pod \"79160a99-2d68-4803-913f-f2a0345fe683\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") "
Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.093523 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-combined-ca-bundle\") pod \"79160a99-2d68-4803-913f-f2a0345fe683\" (UID: \"79160a99-2d68-4803-913f-f2a0345fe683\") "
Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.093362 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79160a99-2d68-4803-913f-f2a0345fe683-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "79160a99-2d68-4803-913f-f2a0345fe683" (UID: "79160a99-2d68-4803-913f-f2a0345fe683"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.093382 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79160a99-2d68-4803-913f-f2a0345fe683-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "79160a99-2d68-4803-913f-f2a0345fe683" (UID: "79160a99-2d68-4803-913f-f2a0345fe683"). InnerVolumeSpecName "log-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.094337 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg44n\" (UniqueName: \"kubernetes.io/projected/dccb3c67-b2f2-49b0-9713-d45599d2ca09-kube-api-access-dg44n\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.094401 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79160a99-2d68-4803-913f-f2a0345fe683-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.094457 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79160a99-2d68-4803-913f-f2a0345fe683-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.094554 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dccb3c67-b2f2-49b0-9713-d45599d2ca09-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.096514 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-scripts" (OuterVolumeSpecName: "scripts") pod "79160a99-2d68-4803-913f-f2a0345fe683" (UID: "79160a99-2d68-4803-913f-f2a0345fe683"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.096543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79160a99-2d68-4803-913f-f2a0345fe683-kube-api-access-47s2k" (OuterVolumeSpecName: "kube-api-access-47s2k") pod "79160a99-2d68-4803-913f-f2a0345fe683" (UID: "79160a99-2d68-4803-913f-f2a0345fe683"). InnerVolumeSpecName "kube-api-access-47s2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.116394 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "79160a99-2d68-4803-913f-f2a0345fe683" (UID: "79160a99-2d68-4803-913f-f2a0345fe683"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.133640 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="270f066c-48f3-4f57-a70b-bf22df03c035" path="/var/lib/kubelet/pods/270f066c-48f3-4f57-a70b-bf22df03c035/volumes" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.134240 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a42faa8-3979-4ca1-8fba-b213170a2e7b" path="/var/lib/kubelet/pods/7a42faa8-3979-4ca1-8fba-b213170a2e7b/volumes" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.158700 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "79160a99-2d68-4803-913f-f2a0345fe683" (UID: "79160a99-2d68-4803-913f-f2a0345fe683"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.168980 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79160a99-2d68-4803-913f-f2a0345fe683" (UID: "79160a99-2d68-4803-913f-f2a0345fe683"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.189293 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-config-data" (OuterVolumeSpecName: "config-data") pod "79160a99-2d68-4803-913f-f2a0345fe683" (UID: "79160a99-2d68-4803-913f-f2a0345fe683"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.196330 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47s2k\" (UniqueName: \"kubernetes.io/projected/79160a99-2d68-4803-913f-f2a0345fe683-kube-api-access-47s2k\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.196358 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.196367 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.196377 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.196385 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.196394 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/79160a99-2d68-4803-913f-f2a0345fe683-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.196470 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.196518 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data podName:fe254b04-57c7-42af-a1e3-c3a36a610fc2 nodeName:}" failed. No retries permitted until 2026-02-17 09:02:11.196501537 +0000 UTC m=+1278.857262760 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "fe254b04-57c7-42af-a1e3-c3a36a610fc2") : secret "watcher-kuttl-decision-engine-config-data" not found Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.504899 4813 generic.go:334] "Generic (PLEG): container finished" podID="79160a99-2d68-4803-913f-f2a0345fe683" containerID="9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4" exitCode=0 Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.504987 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.504985 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79160a99-2d68-4803-913f-f2a0345fe683","Type":"ContainerDied","Data":"9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4"} Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.506207 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79160a99-2d68-4803-913f-f2a0345fe683","Type":"ContainerDied","Data":"f48e4f3a824887699215d5b3c5500931139eebeda2e784120bc4df54fb782281"} Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.506243 4813 scope.go:117] "RemoveContainer" containerID="3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.507889 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4" event={"ID":"dccb3c67-b2f2-49b0-9713-d45599d2ca09","Type":"ContainerDied","Data":"d1bb798cee9e9a39c1c2f5fd9ee74b9fc9fbb7a1ce9305fa88d48b96be7dde27"} Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.507987 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1bb798cee9e9a39c1c2f5fd9ee74b9fc9fbb7a1ce9305fa88d48b96be7dde27" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.508090 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher1ec2-account-delete-s22r4" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.523769 4813 scope.go:117] "RemoveContainer" containerID="d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.543073 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.543358 4813 scope.go:117] "RemoveContainer" containerID="9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.550357 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.566860 4813 scope.go:117] "RemoveContainer" containerID="ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.571496 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.571929 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="sg-core" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.571951 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="sg-core" Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.571967 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270f066c-48f3-4f57-a70b-bf22df03c035" containerName="watcher-kuttl-api-log" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.571973 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="270f066c-48f3-4f57-a70b-bf22df03c035" containerName="watcher-kuttl-api-log" Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.571987 4813 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="ceilometer-notification-agent" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.571992 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="ceilometer-notification-agent" Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.572003 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a42faa8-3979-4ca1-8fba-b213170a2e7b" containerName="watcher-applier" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572009 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a42faa8-3979-4ca1-8fba-b213170a2e7b" containerName="watcher-applier" Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.572022 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="ceilometer-central-agent" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572028 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="ceilometer-central-agent" Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.572037 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccb3c67-b2f2-49b0-9713-d45599d2ca09" containerName="mariadb-account-delete" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572043 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccb3c67-b2f2-49b0-9713-d45599d2ca09" containerName="mariadb-account-delete" Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.572054 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="270f066c-48f3-4f57-a70b-bf22df03c035" containerName="watcher-api" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572060 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="270f066c-48f3-4f57-a70b-bf22df03c035" containerName="watcher-api" Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.572067 4813 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="proxy-httpd" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572072 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="proxy-httpd" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572217 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="ceilometer-notification-agent" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572230 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="proxy-httpd" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572238 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a42faa8-3979-4ca1-8fba-b213170a2e7b" containerName="watcher-applier" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572255 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="dccb3c67-b2f2-49b0-9713-d45599d2ca09" containerName="mariadb-account-delete" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572264 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="270f066c-48f3-4f57-a70b-bf22df03c035" containerName="watcher-kuttl-api-log" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572277 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="sg-core" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572287 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="79160a99-2d68-4803-913f-f2a0345fe683" containerName="ceilometer-central-agent" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.572297 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="270f066c-48f3-4f57-a70b-bf22df03c035" containerName="watcher-api" Feb 17 09:02:07 crc 
kubenswrapper[4813]: I0217 09:02:07.573976 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.576467 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.576512 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.581007 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.586581 4813 scope.go:117] "RemoveContainer" containerID="3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d" Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.589574 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d\": container with ID starting with 3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d not found: ID does not exist" containerID="3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.589678 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d"} err="failed to get container status \"3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d\": rpc error: code = NotFound desc = could not find container \"3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d\": container with ID starting with 3f3b36dfb54df4fad7fbbc64ba809e0f87fa855ce57ac1d0b31e4a890a11409d not found: ID does not exist" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.589716 
4813 scope.go:117] "RemoveContainer" containerID="d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.589719 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.590433 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32\": container with ID starting with d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32 not found: ID does not exist" containerID="d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.590522 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32"} err="failed to get container status \"d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32\": rpc error: code = NotFound desc = could not find container \"d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32\": container with ID starting with d91150263ed825ebfbc9014005e8a7083a6ef2f5678306fbdfb0de2a1a715c32 not found: ID does not exist" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.590547 4813 scope.go:117] "RemoveContainer" containerID="9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4" Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.590837 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4\": container with ID starting with 9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4 not found: ID does not exist" containerID="9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4" Feb 17 09:02:07 crc 
kubenswrapper[4813]: I0217 09:02:07.590870 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4"} err="failed to get container status \"9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4\": rpc error: code = NotFound desc = could not find container \"9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4\": container with ID starting with 9f5c5bcb5cfda471a7106638cbb797c5657ef028fcdfe94cd727997aef64b1b4 not found: ID does not exist" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.590891 4813 scope.go:117] "RemoveContainer" containerID="ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2" Feb 17 09:02:07 crc kubenswrapper[4813]: E0217 09:02:07.591228 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2\": container with ID starting with ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2 not found: ID does not exist" containerID="ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.591254 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2"} err="failed to get container status \"ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2\": rpc error: code = NotFound desc = could not find container \"ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2\": container with ID starting with ec0177b697b1b71ba39a116782e76f2f0abcf8212b336af107f33ffc5b82fcd2 not found: ID does not exist" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.703680 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-run-httpd\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.703736 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.703802 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.703825 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-scripts\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.703865 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-config-data\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.703899 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z5qs\" (UniqueName: 
\"kubernetes.io/projected/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-kube-api-access-5z5qs\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.703957 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.703978 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-log-httpd\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.805058 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-run-httpd\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.805098 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.805154 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.805173 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-scripts\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.805288 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-config-data\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.805670 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-run-httpd\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.805331 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z5qs\" (UniqueName: \"kubernetes.io/projected/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-kube-api-access-5z5qs\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.805889 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.805906 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-log-httpd\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.806153 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-log-httpd\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.809944 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.809962 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.810546 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.810628 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-scripts\") pod \"ceilometer-0\" (UID: 
\"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.814588 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-config-data\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.819070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z5qs\" (UniqueName: \"kubernetes.io/projected/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-kube-api-access-5z5qs\") pod \"ceilometer-0\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:07 crc kubenswrapper[4813]: I0217 09:02:07.890874 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.392427 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:02:08 crc kubenswrapper[4813]: W0217 09:02:08.427647 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5408fa96_9ac7_4cf4_9348_7151a1e27ae5.slice/crio-98c724c7dfa41c2867e927c0c071ebdc3823f594998c96dd2f8de72a56dcbf3d WatchSource:0}: Error finding container 98c724c7dfa41c2867e927c0c071ebdc3823f594998c96dd2f8de72a56dcbf3d: Status 404 returned error can't find the container with id 98c724c7dfa41c2867e927c0c071ebdc3823f594998c96dd2f8de72a56dcbf3d Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.515371 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg"] Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.537934 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher1ec2-account-delete-s22r4"] Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.544298 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-1ec2-account-create-update-xw8jg"] Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.553711 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5408fa96-9ac7-4cf4-9348-7151a1e27ae5","Type":"ContainerStarted","Data":"98c724c7dfa41c2867e927c0c071ebdc3823f594998c96dd2f8de72a56dcbf3d"} Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.555092 4813 generic.go:334] "Generic (PLEG): container finished" podID="fe254b04-57c7-42af-a1e3-c3a36a610fc2" containerID="b9d42836cd13de4c7363fd4eb091d772860977bb0c0522f8f8da893e78f6f519" exitCode=0 Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.555121 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"fe254b04-57c7-42af-a1e3-c3a36a610fc2","Type":"ContainerDied","Data":"b9d42836cd13de4c7363fd4eb091d772860977bb0c0522f8f8da893e78f6f519"} Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.555466 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher1ec2-account-delete-s22r4"] Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.570261 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-nlj76"] Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.576971 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.578962 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-nlj76"] Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.684886 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-s6zhs"] Feb 17 09:02:08 crc kubenswrapper[4813]: E0217 09:02:08.685476 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe254b04-57c7-42af-a1e3-c3a36a610fc2" containerName="watcher-decision-engine" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.685495 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe254b04-57c7-42af-a1e3-c3a36a610fc2" containerName="watcher-decision-engine" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.685657 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe254b04-57c7-42af-a1e3-c3a36a610fc2" containerName="watcher-decision-engine" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.686352 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-s6zhs" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.699681 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx"] Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.700679 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.703554 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.708775 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-s6zhs"] Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.718377 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx"] Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.724392 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe254b04-57c7-42af-a1e3-c3a36a610fc2-logs\") pod \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.724463 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-combined-ca-bundle\") pod \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.724498 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8tjv\" (UniqueName: \"kubernetes.io/projected/fe254b04-57c7-42af-a1e3-c3a36a610fc2-kube-api-access-n8tjv\") pod \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.724521 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-custom-prometheus-ca\") pod 
\"fe254b04-57c7-42af-a1e3-c3a36a610fc2\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.724565 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data\") pod \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\" (UID: \"fe254b04-57c7-42af-a1e3-c3a36a610fc2\") " Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.725214 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe254b04-57c7-42af-a1e3-c3a36a610fc2-logs" (OuterVolumeSpecName: "logs") pod "fe254b04-57c7-42af-a1e3-c3a36a610fc2" (UID: "fe254b04-57c7-42af-a1e3-c3a36a610fc2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.735650 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe254b04-57c7-42af-a1e3-c3a36a610fc2-kube-api-access-n8tjv" (OuterVolumeSpecName: "kube-api-access-n8tjv") pod "fe254b04-57c7-42af-a1e3-c3a36a610fc2" (UID: "fe254b04-57c7-42af-a1e3-c3a36a610fc2"). InnerVolumeSpecName "kube-api-access-n8tjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.753192 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "fe254b04-57c7-42af-a1e3-c3a36a610fc2" (UID: "fe254b04-57c7-42af-a1e3-c3a36a610fc2"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.770805 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe254b04-57c7-42af-a1e3-c3a36a610fc2" (UID: "fe254b04-57c7-42af-a1e3-c3a36a610fc2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.773490 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data" (OuterVolumeSpecName: "config-data") pod "fe254b04-57c7-42af-a1e3-c3a36a610fc2" (UID: "fe254b04-57c7-42af-a1e3-c3a36a610fc2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.826649 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12e1212-e6b3-4a1f-bd01-9983df669f5c-operator-scripts\") pod \"watcher-25d0-account-create-update-vzlsx\" (UID: \"e12e1212-e6b3-4a1f-bd01-9983df669f5c\") " pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.826698 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdgz2\" (UniqueName: \"kubernetes.io/projected/ea46d7cb-8914-4419-baa6-c7cbeefad3af-kube-api-access-zdgz2\") pod \"watcher-db-create-s6zhs\" (UID: \"ea46d7cb-8914-4419-baa6-c7cbeefad3af\") " pod="watcher-kuttl-default/watcher-db-create-s6zhs" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.826737 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ea46d7cb-8914-4419-baa6-c7cbeefad3af-operator-scripts\") pod \"watcher-db-create-s6zhs\" (UID: \"ea46d7cb-8914-4419-baa6-c7cbeefad3af\") " pod="watcher-kuttl-default/watcher-db-create-s6zhs" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.826759 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfb9j\" (UniqueName: \"kubernetes.io/projected/e12e1212-e6b3-4a1f-bd01-9983df669f5c-kube-api-access-bfb9j\") pod \"watcher-25d0-account-create-update-vzlsx\" (UID: \"e12e1212-e6b3-4a1f-bd01-9983df669f5c\") " pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.826857 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.826868 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8tjv\" (UniqueName: \"kubernetes.io/projected/fe254b04-57c7-42af-a1e3-c3a36a610fc2-kube-api-access-n8tjv\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.826879 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.826888 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe254b04-57c7-42af-a1e3-c3a36a610fc2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.826897 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe254b04-57c7-42af-a1e3-c3a36a610fc2-logs\") on node \"crc\" 
DevicePath \"\"" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.928593 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea46d7cb-8914-4419-baa6-c7cbeefad3af-operator-scripts\") pod \"watcher-db-create-s6zhs\" (UID: \"ea46d7cb-8914-4419-baa6-c7cbeefad3af\") " pod="watcher-kuttl-default/watcher-db-create-s6zhs" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.928651 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfb9j\" (UniqueName: \"kubernetes.io/projected/e12e1212-e6b3-4a1f-bd01-9983df669f5c-kube-api-access-bfb9j\") pod \"watcher-25d0-account-create-update-vzlsx\" (UID: \"e12e1212-e6b3-4a1f-bd01-9983df669f5c\") " pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.928777 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12e1212-e6b3-4a1f-bd01-9983df669f5c-operator-scripts\") pod \"watcher-25d0-account-create-update-vzlsx\" (UID: \"e12e1212-e6b3-4a1f-bd01-9983df669f5c\") " pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.928816 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdgz2\" (UniqueName: \"kubernetes.io/projected/ea46d7cb-8914-4419-baa6-c7cbeefad3af-kube-api-access-zdgz2\") pod \"watcher-db-create-s6zhs\" (UID: \"ea46d7cb-8914-4419-baa6-c7cbeefad3af\") " pod="watcher-kuttl-default/watcher-db-create-s6zhs" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.929992 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea46d7cb-8914-4419-baa6-c7cbeefad3af-operator-scripts\") pod \"watcher-db-create-s6zhs\" (UID: 
\"ea46d7cb-8914-4419-baa6-c7cbeefad3af\") " pod="watcher-kuttl-default/watcher-db-create-s6zhs" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.930635 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12e1212-e6b3-4a1f-bd01-9983df669f5c-operator-scripts\") pod \"watcher-25d0-account-create-update-vzlsx\" (UID: \"e12e1212-e6b3-4a1f-bd01-9983df669f5c\") " pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.945812 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdgz2\" (UniqueName: \"kubernetes.io/projected/ea46d7cb-8914-4419-baa6-c7cbeefad3af-kube-api-access-zdgz2\") pod \"watcher-db-create-s6zhs\" (UID: \"ea46d7cb-8914-4419-baa6-c7cbeefad3af\") " pod="watcher-kuttl-default/watcher-db-create-s6zhs" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.947243 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfb9j\" (UniqueName: \"kubernetes.io/projected/e12e1212-e6b3-4a1f-bd01-9983df669f5c-kube-api-access-bfb9j\") pod \"watcher-25d0-account-create-update-vzlsx\" (UID: \"e12e1212-e6b3-4a1f-bd01-9983df669f5c\") " pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" Feb 17 09:02:08 crc kubenswrapper[4813]: I0217 09:02:08.999147 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-s6zhs" Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.019099 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.222902 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79160a99-2d68-4803-913f-f2a0345fe683" path="/var/lib/kubelet/pods/79160a99-2d68-4803-913f-f2a0345fe683/volumes" Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.224144 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b0dc5e-424c-4953-b273-62849bf61f9a" path="/var/lib/kubelet/pods/d4b0dc5e-424c-4953-b273-62849bf61f9a/volumes" Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.226159 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dccb3c67-b2f2-49b0-9713-d45599d2ca09" path="/var/lib/kubelet/pods/dccb3c67-b2f2-49b0-9713-d45599d2ca09/volumes" Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.226699 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff3f839c-ec07-474e-96a4-7e7e3623584d" path="/var/lib/kubelet/pods/ff3f839c-ec07-474e-96a4-7e7e3623584d/volumes" Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.550411 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-s6zhs"] Feb 17 09:02:09 crc kubenswrapper[4813]: W0217 09:02:09.555466 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea46d7cb_8914_4419_baa6_c7cbeefad3af.slice/crio-f72572f075fdf6393a9e5d975acb80f514d0b6364004968279650e53cc8f6a34 WatchSource:0}: Error finding container f72572f075fdf6393a9e5d975acb80f514d0b6364004968279650e53cc8f6a34: Status 404 returned error can't find the container with id f72572f075fdf6393a9e5d975acb80f514d0b6364004968279650e53cc8f6a34 Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.580698 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"5408fa96-9ac7-4cf4-9348-7151a1e27ae5","Type":"ContainerStarted","Data":"d630fda29b1576c42ce6919ad26f12ba86db7c1fe65c7c6e2545abfb441934db"} Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.587964 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-s6zhs" event={"ID":"ea46d7cb-8914-4419-baa6-c7cbeefad3af","Type":"ContainerStarted","Data":"f72572f075fdf6393a9e5d975acb80f514d0b6364004968279650e53cc8f6a34"} Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.589103 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"fe254b04-57c7-42af-a1e3-c3a36a610fc2","Type":"ContainerDied","Data":"24d658edaa32d0aeea9b0b816be040ee92ce62e44a62dc4ee9501e74e89541d4"} Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.589135 4813 scope.go:117] "RemoveContainer" containerID="b9d42836cd13de4c7363fd4eb091d772860977bb0c0522f8f8da893e78f6f519" Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.589254 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.623505 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx"] Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.713936 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:02:09 crc kubenswrapper[4813]: I0217 09:02:09.719661 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:02:10 crc kubenswrapper[4813]: I0217 09:02:10.613435 4813 generic.go:334] "Generic (PLEG): container finished" podID="e12e1212-e6b3-4a1f-bd01-9983df669f5c" containerID="3504c6795c1b6df62b83f757073879b3443b391862b0b9332bd99d8b38bb261c" exitCode=0 Feb 17 09:02:10 crc kubenswrapper[4813]: I0217 09:02:10.613747 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" event={"ID":"e12e1212-e6b3-4a1f-bd01-9983df669f5c","Type":"ContainerDied","Data":"3504c6795c1b6df62b83f757073879b3443b391862b0b9332bd99d8b38bb261c"} Feb 17 09:02:10 crc kubenswrapper[4813]: I0217 09:02:10.613778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" event={"ID":"e12e1212-e6b3-4a1f-bd01-9983df669f5c","Type":"ContainerStarted","Data":"e3c900958f5638677cec909142b3ed856b171271fae9711ca610ea60d8927320"} Feb 17 09:02:10 crc kubenswrapper[4813]: I0217 09:02:10.618226 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5408fa96-9ac7-4cf4-9348-7151a1e27ae5","Type":"ContainerStarted","Data":"7139f89ee12027d1282389884c3917406ab267f0d5857f9f077df0a6a56e6fec"} Feb 17 09:02:10 crc kubenswrapper[4813]: I0217 09:02:10.618280 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5408fa96-9ac7-4cf4-9348-7151a1e27ae5","Type":"ContainerStarted","Data":"df923b084f4b92731e30f59aa6d5abb977bfc2b571b2d4720d2e7d8d72b2f8b2"} Feb 17 09:02:10 crc kubenswrapper[4813]: I0217 09:02:10.622531 4813 generic.go:334] "Generic (PLEG): container finished" podID="ea46d7cb-8914-4419-baa6-c7cbeefad3af" containerID="620961d7dfab08a80429352b9d1bc32c920166f4ee6488e2498ee81062095dcc" exitCode=0 Feb 17 09:02:10 crc kubenswrapper[4813]: I0217 09:02:10.622711 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-s6zhs" event={"ID":"ea46d7cb-8914-4419-baa6-c7cbeefad3af","Type":"ContainerDied","Data":"620961d7dfab08a80429352b9d1bc32c920166f4ee6488e2498ee81062095dcc"} Feb 17 09:02:11 crc kubenswrapper[4813]: I0217 09:02:11.120665 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe254b04-57c7-42af-a1e3-c3a36a610fc2" path="/var/lib/kubelet/pods/fe254b04-57c7-42af-a1e3-c3a36a610fc2/volumes" Feb 17 09:02:11 crc kubenswrapper[4813]: I0217 09:02:11.633630 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5408fa96-9ac7-4cf4-9348-7151a1e27ae5","Type":"ContainerStarted","Data":"9c651d69c19286c392cf0a16b8bc0ba002e45d6601ff0b130a848ad746e45e4d"} Feb 17 09:02:11 crc kubenswrapper[4813]: I0217 09:02:11.666681 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.782880512 podStartE2EDuration="4.666659851s" podCreationTimestamp="2026-02-17 09:02:07 +0000 UTC" firstStartedPulling="2026-02-17 09:02:08.433900851 +0000 UTC m=+1276.094662084" lastFinishedPulling="2026-02-17 09:02:11.3176802 +0000 UTC m=+1278.978441423" observedRunningTime="2026-02-17 09:02:11.652632552 +0000 UTC m=+1279.313393775" watchObservedRunningTime="2026-02-17 09:02:11.666659851 +0000 UTC m=+1279.327421074" Feb 17 09:02:11 crc kubenswrapper[4813]: I0217 
09:02:11.997896 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-s6zhs" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.064933 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.177499 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12e1212-e6b3-4a1f-bd01-9983df669f5c-operator-scripts\") pod \"e12e1212-e6b3-4a1f-bd01-9983df669f5c\" (UID: \"e12e1212-e6b3-4a1f-bd01-9983df669f5c\") " Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.177610 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdgz2\" (UniqueName: \"kubernetes.io/projected/ea46d7cb-8914-4419-baa6-c7cbeefad3af-kube-api-access-zdgz2\") pod \"ea46d7cb-8914-4419-baa6-c7cbeefad3af\" (UID: \"ea46d7cb-8914-4419-baa6-c7cbeefad3af\") " Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.177721 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfb9j\" (UniqueName: \"kubernetes.io/projected/e12e1212-e6b3-4a1f-bd01-9983df669f5c-kube-api-access-bfb9j\") pod \"e12e1212-e6b3-4a1f-bd01-9983df669f5c\" (UID: \"e12e1212-e6b3-4a1f-bd01-9983df669f5c\") " Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.177772 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea46d7cb-8914-4419-baa6-c7cbeefad3af-operator-scripts\") pod \"ea46d7cb-8914-4419-baa6-c7cbeefad3af\" (UID: \"ea46d7cb-8914-4419-baa6-c7cbeefad3af\") " Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.178018 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e12e1212-e6b3-4a1f-bd01-9983df669f5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e12e1212-e6b3-4a1f-bd01-9983df669f5c" (UID: "e12e1212-e6b3-4a1f-bd01-9983df669f5c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.178447 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e12e1212-e6b3-4a1f-bd01-9983df669f5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.178518 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea46d7cb-8914-4419-baa6-c7cbeefad3af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea46d7cb-8914-4419-baa6-c7cbeefad3af" (UID: "ea46d7cb-8914-4419-baa6-c7cbeefad3af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.183663 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12e1212-e6b3-4a1f-bd01-9983df669f5c-kube-api-access-bfb9j" (OuterVolumeSpecName: "kube-api-access-bfb9j") pod "e12e1212-e6b3-4a1f-bd01-9983df669f5c" (UID: "e12e1212-e6b3-4a1f-bd01-9983df669f5c"). InnerVolumeSpecName "kube-api-access-bfb9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.183721 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea46d7cb-8914-4419-baa6-c7cbeefad3af-kube-api-access-zdgz2" (OuterVolumeSpecName: "kube-api-access-zdgz2") pod "ea46d7cb-8914-4419-baa6-c7cbeefad3af" (UID: "ea46d7cb-8914-4419-baa6-c7cbeefad3af"). InnerVolumeSpecName "kube-api-access-zdgz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.279900 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea46d7cb-8914-4419-baa6-c7cbeefad3af-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.279933 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdgz2\" (UniqueName: \"kubernetes.io/projected/ea46d7cb-8914-4419-baa6-c7cbeefad3af-kube-api-access-zdgz2\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.279945 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfb9j\" (UniqueName: \"kubernetes.io/projected/e12e1212-e6b3-4a1f-bd01-9983df669f5c-kube-api-access-bfb9j\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.643753 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-s6zhs" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.645018 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-s6zhs" event={"ID":"ea46d7cb-8914-4419-baa6-c7cbeefad3af","Type":"ContainerDied","Data":"f72572f075fdf6393a9e5d975acb80f514d0b6364004968279650e53cc8f6a34"} Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.645104 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f72572f075fdf6393a9e5d975acb80f514d0b6364004968279650e53cc8f6a34" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.647391 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" event={"ID":"e12e1212-e6b3-4a1f-bd01-9983df669f5c","Type":"ContainerDied","Data":"e3c900958f5638677cec909142b3ed856b171271fae9711ca610ea60d8927320"} Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.647424 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.647442 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3c900958f5638677cec909142b3ed856b171271fae9711ca610ea60d8927320" Feb 17 09:02:12 crc kubenswrapper[4813]: I0217 09:02:12.647572 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.143441 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jscfg"] Feb 17 09:02:14 crc kubenswrapper[4813]: E0217 09:02:14.144065 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea46d7cb-8914-4419-baa6-c7cbeefad3af" containerName="mariadb-database-create" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.144080 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea46d7cb-8914-4419-baa6-c7cbeefad3af" containerName="mariadb-database-create" Feb 17 09:02:14 crc kubenswrapper[4813]: E0217 09:02:14.144113 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12e1212-e6b3-4a1f-bd01-9983df669f5c" containerName="mariadb-account-create-update" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.144121 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12e1212-e6b3-4a1f-bd01-9983df669f5c" containerName="mariadb-account-create-update" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.144295 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12e1212-e6b3-4a1f-bd01-9983df669f5c" containerName="mariadb-account-create-update" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.144338 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea46d7cb-8914-4419-baa6-c7cbeefad3af" containerName="mariadb-database-create" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.144883 4813 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.149496 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.149932 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-mb88z" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.158672 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jscfg"] Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.322161 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-746x6\" (UniqueName: \"kubernetes.io/projected/037fc18b-2e29-4234-8614-fd0eec194ce1-kube-api-access-746x6\") pod \"watcher-kuttl-db-sync-jscfg\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.322202 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-db-sync-config-data\") pod \"watcher-kuttl-db-sync-jscfg\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.322222 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-config-data\") pod \"watcher-kuttl-db-sync-jscfg\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 
09:02:14.322243 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-jscfg\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.423661 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-746x6\" (UniqueName: \"kubernetes.io/projected/037fc18b-2e29-4234-8614-fd0eec194ce1-kube-api-access-746x6\") pod \"watcher-kuttl-db-sync-jscfg\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.423705 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-db-sync-config-data\") pod \"watcher-kuttl-db-sync-jscfg\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.423726 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-config-data\") pod \"watcher-kuttl-db-sync-jscfg\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.423755 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-jscfg\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 
09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.428784 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-db-sync-config-data\") pod \"watcher-kuttl-db-sync-jscfg\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.428942 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-jscfg\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.430082 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-config-data\") pod \"watcher-kuttl-db-sync-jscfg\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.438800 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-746x6\" (UniqueName: \"kubernetes.io/projected/037fc18b-2e29-4234-8614-fd0eec194ce1-kube-api-access-746x6\") pod \"watcher-kuttl-db-sync-jscfg\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:14 crc kubenswrapper[4813]: I0217 09:02:14.514959 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:15 crc kubenswrapper[4813]: I0217 09:02:15.023022 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jscfg"] Feb 17 09:02:15 crc kubenswrapper[4813]: W0217 09:02:15.030581 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod037fc18b_2e29_4234_8614_fd0eec194ce1.slice/crio-79d072a198a1cf40fa0efa3f2944a24e7ffc935053b0107ef75a867dd148ba97 WatchSource:0}: Error finding container 79d072a198a1cf40fa0efa3f2944a24e7ffc935053b0107ef75a867dd148ba97: Status 404 returned error can't find the container with id 79d072a198a1cf40fa0efa3f2944a24e7ffc935053b0107ef75a867dd148ba97 Feb 17 09:02:15 crc kubenswrapper[4813]: I0217 09:02:15.674405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" event={"ID":"037fc18b-2e29-4234-8614-fd0eec194ce1","Type":"ContainerStarted","Data":"06491395c0afe5849882a1901e32892d7c9b748fbef264396b45f0193bdbeb20"} Feb 17 09:02:15 crc kubenswrapper[4813]: I0217 09:02:15.674685 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" event={"ID":"037fc18b-2e29-4234-8614-fd0eec194ce1","Type":"ContainerStarted","Data":"79d072a198a1cf40fa0efa3f2944a24e7ffc935053b0107ef75a867dd148ba97"} Feb 17 09:02:15 crc kubenswrapper[4813]: I0217 09:02:15.705937 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" podStartSLOduration=1.7059184109999999 podStartE2EDuration="1.705918411s" podCreationTimestamp="2026-02-17 09:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:02:15.701741622 +0000 UTC m=+1283.362502865" watchObservedRunningTime="2026-02-17 
09:02:15.705918411 +0000 UTC m=+1283.366679634" Feb 17 09:02:17 crc kubenswrapper[4813]: I0217 09:02:17.693020 4813 generic.go:334] "Generic (PLEG): container finished" podID="037fc18b-2e29-4234-8614-fd0eec194ce1" containerID="06491395c0afe5849882a1901e32892d7c9b748fbef264396b45f0193bdbeb20" exitCode=0 Feb 17 09:02:17 crc kubenswrapper[4813]: I0217 09:02:17.693487 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" event={"ID":"037fc18b-2e29-4234-8614-fd0eec194ce1","Type":"ContainerDied","Data":"06491395c0afe5849882a1901e32892d7c9b748fbef264396b45f0193bdbeb20"} Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.046676 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.189938 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-db-sync-config-data\") pod \"037fc18b-2e29-4234-8614-fd0eec194ce1\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.189992 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-combined-ca-bundle\") pod \"037fc18b-2e29-4234-8614-fd0eec194ce1\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.190484 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-746x6\" (UniqueName: \"kubernetes.io/projected/037fc18b-2e29-4234-8614-fd0eec194ce1-kube-api-access-746x6\") pod \"037fc18b-2e29-4234-8614-fd0eec194ce1\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.190547 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-config-data\") pod \"037fc18b-2e29-4234-8614-fd0eec194ce1\" (UID: \"037fc18b-2e29-4234-8614-fd0eec194ce1\") " Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.199150 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037fc18b-2e29-4234-8614-fd0eec194ce1-kube-api-access-746x6" (OuterVolumeSpecName: "kube-api-access-746x6") pod "037fc18b-2e29-4234-8614-fd0eec194ce1" (UID: "037fc18b-2e29-4234-8614-fd0eec194ce1"). InnerVolumeSpecName "kube-api-access-746x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.204506 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "037fc18b-2e29-4234-8614-fd0eec194ce1" (UID: "037fc18b-2e29-4234-8614-fd0eec194ce1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.245350 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "037fc18b-2e29-4234-8614-fd0eec194ce1" (UID: "037fc18b-2e29-4234-8614-fd0eec194ce1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.267750 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-config-data" (OuterVolumeSpecName: "config-data") pod "037fc18b-2e29-4234-8614-fd0eec194ce1" (UID: "037fc18b-2e29-4234-8614-fd0eec194ce1"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.293620 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.293669 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.293687 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-746x6\" (UniqueName: \"kubernetes.io/projected/037fc18b-2e29-4234-8614-fd0eec194ce1-kube-api-access-746x6\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.293707 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037fc18b-2e29-4234-8614-fd0eec194ce1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.708818 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" event={"ID":"037fc18b-2e29-4234-8614-fd0eec194ce1","Type":"ContainerDied","Data":"79d072a198a1cf40fa0efa3f2944a24e7ffc935053b0107ef75a867dd148ba97"} Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.708856 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79d072a198a1cf40fa0efa3f2944a24e7ffc935053b0107ef75a867dd148ba97" Feb 17 09:02:19 crc kubenswrapper[4813]: I0217 09:02:19.708927 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jscfg" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.373518 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:02:20 crc kubenswrapper[4813]: E0217 09:02:20.374072 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037fc18b-2e29-4234-8614-fd0eec194ce1" containerName="watcher-kuttl-db-sync" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.374098 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="037fc18b-2e29-4234-8614-fd0eec194ce1" containerName="watcher-kuttl-db-sync" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.374462 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="037fc18b-2e29-4234-8614-fd0eec194ce1" containerName="watcher-kuttl-db-sync" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.375666 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.378624 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-mb88z" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.379266 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.388692 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.466897 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.468627 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.473727 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.475950 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.477009 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.480816 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.485095 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.489668 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.521103 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.521174 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 
09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.521196 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.521237 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhrl5\" (UniqueName: \"kubernetes.io/projected/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-kube-api-access-dhrl5\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.521284 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622427 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622494 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f03c620-782b-43d2-b1d0-2eda3feb85e3-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc 
kubenswrapper[4813]: I0217 09:02:20.622513 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6czl\" (UniqueName: \"kubernetes.io/projected/5f03c620-782b-43d2-b1d0-2eda3feb85e3-kube-api-access-h6czl\") pod \"watcher-kuttl-applier-0\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622536 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622560 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622581 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhrl5\" (UniqueName: \"kubernetes.io/projected/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-kube-api-access-dhrl5\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622598 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8469008e-d099-4b91-b304-645d0a759619-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622682 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622699 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f03c620-782b-43d2-b1d0-2eda3feb85e3-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622731 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622766 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622853 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") 
" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622898 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zk9g\" (UniqueName: \"kubernetes.io/projected/8469008e-d099-4b91-b304-645d0a759619-kube-api-access-2zk9g\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.622920 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f03c620-782b-43d2-b1d0-2eda3feb85e3-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.623019 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.626526 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.638228 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" 
Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.638294 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.642857 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhrl5\" (UniqueName: \"kubernetes.io/projected/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-kube-api-access-dhrl5\") pod \"watcher-kuttl-api-0\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.701051 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.724208 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.724261 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zk9g\" (UniqueName: \"kubernetes.io/projected/8469008e-d099-4b91-b304-645d0a759619-kube-api-access-2zk9g\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.724279 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5f03c620-782b-43d2-b1d0-2eda3feb85e3-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.724400 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6czl\" (UniqueName: \"kubernetes.io/projected/5f03c620-782b-43d2-b1d0-2eda3feb85e3-kube-api-access-h6czl\") pod \"watcher-kuttl-applier-0\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.724419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f03c620-782b-43d2-b1d0-2eda3feb85e3-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.724439 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.724462 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.724480 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8469008e-d099-4b91-b304-645d0a759619-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.724533 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f03c620-782b-43d2-b1d0-2eda3feb85e3-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.724749 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f03c620-782b-43d2-b1d0-2eda3feb85e3-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.725412 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8469008e-d099-4b91-b304-645d0a759619-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.729380 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.730005 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f03c620-782b-43d2-b1d0-2eda3feb85e3-config-data\") pod 
\"watcher-kuttl-applier-0\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.730261 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f03c620-782b-43d2-b1d0-2eda3feb85e3-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.731181 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.732347 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.743141 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6czl\" (UniqueName: \"kubernetes.io/projected/5f03c620-782b-43d2-b1d0-2eda3feb85e3-kube-api-access-h6czl\") pod \"watcher-kuttl-applier-0\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.743883 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zk9g\" (UniqueName: \"kubernetes.io/projected/8469008e-d099-4b91-b304-645d0a759619-kube-api-access-2zk9g\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.797030 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:20 crc kubenswrapper[4813]: I0217 09:02:20.813352 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.155484 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.334828 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:02:21 crc kubenswrapper[4813]: W0217 09:02:21.339746 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8469008e_d099_4b91_b304_645d0a759619.slice/crio-9fad73926130576f1cdd3d0398194fdc46c6b050b167079e4a6cf22d9ddc82fb WatchSource:0}: Error finding container 9fad73926130576f1cdd3d0398194fdc46c6b050b167079e4a6cf22d9ddc82fb: Status 404 returned error can't find the container with id 9fad73926130576f1cdd3d0398194fdc46c6b050b167079e4a6cf22d9ddc82fb Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.343446 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:02:21 crc kubenswrapper[4813]: W0217 09:02:21.356921 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f03c620_782b_43d2_b1d0_2eda3feb85e3.slice/crio-a17fb6a6a7d7708ae4b07b9fc06924bfdc0d3144c164fbea646088ba733abcf4 WatchSource:0}: Error finding container 
a17fb6a6a7d7708ae4b07b9fc06924bfdc0d3144c164fbea646088ba733abcf4: Status 404 returned error can't find the container with id a17fb6a6a7d7708ae4b07b9fc06924bfdc0d3144c164fbea646088ba733abcf4 Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.725277 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d92c2e8a-4e66-4290-b1db-60f9dd1b7091","Type":"ContainerStarted","Data":"318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230"} Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.725344 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d92c2e8a-4e66-4290-b1db-60f9dd1b7091","Type":"ContainerStarted","Data":"8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d"} Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.725356 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d92c2e8a-4e66-4290-b1db-60f9dd1b7091","Type":"ContainerStarted","Data":"a0b923e982068c12e3d341c545516c3f5aaf5b98c702d44fa95672902a2b58c6"} Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.725928 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.726842 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8469008e-d099-4b91-b304-645d0a759619","Type":"ContainerStarted","Data":"6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5"} Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.726887 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8469008e-d099-4b91-b304-645d0a759619","Type":"ContainerStarted","Data":"9fad73926130576f1cdd3d0398194fdc46c6b050b167079e4a6cf22d9ddc82fb"} Feb 17 09:02:21 
crc kubenswrapper[4813]: I0217 09:02:21.728513 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5f03c620-782b-43d2-b1d0-2eda3feb85e3","Type":"ContainerStarted","Data":"2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa"} Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.728557 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5f03c620-782b-43d2-b1d0-2eda3feb85e3","Type":"ContainerStarted","Data":"a17fb6a6a7d7708ae4b07b9fc06924bfdc0d3144c164fbea646088ba733abcf4"} Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.743511 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.743460712 podStartE2EDuration="1.743460712s" podCreationTimestamp="2026-02-17 09:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:02:21.740417685 +0000 UTC m=+1289.401178908" watchObservedRunningTime="2026-02-17 09:02:21.743460712 +0000 UTC m=+1289.404221935" Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.762214 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.7621895250000001 podStartE2EDuration="1.762189525s" podCreationTimestamp="2026-02-17 09:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:02:21.756587375 +0000 UTC m=+1289.417348598" watchObservedRunningTime="2026-02-17 09:02:21.762189525 +0000 UTC m=+1289.422950748" Feb 17 09:02:21 crc kubenswrapper[4813]: I0217 09:02:21.771295 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
podStartSLOduration=1.771271783 podStartE2EDuration="1.771271783s" podCreationTimestamp="2026-02-17 09:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:02:21.770012267 +0000 UTC m=+1289.430773490" watchObservedRunningTime="2026-02-17 09:02:21.771271783 +0000 UTC m=+1289.432033006" Feb 17 09:02:23 crc kubenswrapper[4813]: I0217 09:02:23.848611 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:25 crc kubenswrapper[4813]: I0217 09:02:25.701909 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:25 crc kubenswrapper[4813]: I0217 09:02:25.798135 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:30 crc kubenswrapper[4813]: I0217 09:02:30.702888 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:30 crc kubenswrapper[4813]: I0217 09:02:30.709857 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:30 crc kubenswrapper[4813]: I0217 09:02:30.798739 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:30 crc kubenswrapper[4813]: I0217 09:02:30.813539 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:30 crc kubenswrapper[4813]: I0217 09:02:30.829940 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:30 crc kubenswrapper[4813]: I0217 09:02:30.842167 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:30 crc kubenswrapper[4813]: I0217 09:02:30.866674 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:30 crc kubenswrapper[4813]: I0217 09:02:30.901662 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:31 crc kubenswrapper[4813]: I0217 09:02:31.832258 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:31 crc kubenswrapper[4813]: I0217 09:02:31.867149 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:31 crc kubenswrapper[4813]: I0217 09:02:31.975553 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:02:31 crc kubenswrapper[4813]: I0217 09:02:31.975823 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="ceilometer-central-agent" containerID="cri-o://d630fda29b1576c42ce6919ad26f12ba86db7c1fe65c7c6e2545abfb441934db" gracePeriod=30 Feb 17 09:02:31 crc kubenswrapper[4813]: I0217 09:02:31.975903 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="ceilometer-notification-agent" containerID="cri-o://df923b084f4b92731e30f59aa6d5abb977bfc2b571b2d4720d2e7d8d72b2f8b2" gracePeriod=30 Feb 17 09:02:31 crc kubenswrapper[4813]: I0217 09:02:31.975935 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" 
containerName="proxy-httpd" containerID="cri-o://9c651d69c19286c392cf0a16b8bc0ba002e45d6601ff0b130a848ad746e45e4d" gracePeriod=30 Feb 17 09:02:31 crc kubenswrapper[4813]: I0217 09:02:31.975891 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="sg-core" containerID="cri-o://7139f89ee12027d1282389884c3917406ab267f0d5857f9f077df0a6a56e6fec" gracePeriod=30 Feb 17 09:02:31 crc kubenswrapper[4813]: I0217 09:02:31.982551 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.144:3000/\": EOF" Feb 17 09:02:32 crc kubenswrapper[4813]: I0217 09:02:32.848430 4813 generic.go:334] "Generic (PLEG): container finished" podID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerID="9c651d69c19286c392cf0a16b8bc0ba002e45d6601ff0b130a848ad746e45e4d" exitCode=0 Feb 17 09:02:32 crc kubenswrapper[4813]: I0217 09:02:32.848468 4813 generic.go:334] "Generic (PLEG): container finished" podID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerID="7139f89ee12027d1282389884c3917406ab267f0d5857f9f077df0a6a56e6fec" exitCode=2 Feb 17 09:02:32 crc kubenswrapper[4813]: I0217 09:02:32.848478 4813 generic.go:334] "Generic (PLEG): container finished" podID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerID="d630fda29b1576c42ce6919ad26f12ba86db7c1fe65c7c6e2545abfb441934db" exitCode=0 Feb 17 09:02:32 crc kubenswrapper[4813]: I0217 09:02:32.849464 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5408fa96-9ac7-4cf4-9348-7151a1e27ae5","Type":"ContainerDied","Data":"9c651d69c19286c392cf0a16b8bc0ba002e45d6601ff0b130a848ad746e45e4d"} Feb 17 09:02:32 crc kubenswrapper[4813]: I0217 09:02:32.849571 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5408fa96-9ac7-4cf4-9348-7151a1e27ae5","Type":"ContainerDied","Data":"7139f89ee12027d1282389884c3917406ab267f0d5857f9f077df0a6a56e6fec"} Feb 17 09:02:32 crc kubenswrapper[4813]: I0217 09:02:32.849587 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5408fa96-9ac7-4cf4-9348-7151a1e27ae5","Type":"ContainerDied","Data":"d630fda29b1576c42ce6919ad26f12ba86db7c1fe65c7c6e2545abfb441934db"} Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.148966 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jscfg"] Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.156503 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jscfg"] Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.197017 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher25d0-account-delete-pnkcc"] Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.198210 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc" Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.218362 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher25d0-account-delete-pnkcc"] Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.260991 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.261164 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="5f03c620-782b-43d2-b1d0-2eda3feb85e3" containerName="watcher-applier" containerID="cri-o://2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa" gracePeriod=30 Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.322326 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.322511 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d92c2e8a-4e66-4290-b1db-60f9dd1b7091" containerName="watcher-kuttl-api-log" containerID="cri-o://8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d" gracePeriod=30 Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.322855 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d92c2e8a-4e66-4290-b1db-60f9dd1b7091" containerName="watcher-api" containerID="cri-o://318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230" gracePeriod=30 Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.335200 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.352410 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rndg\" (UniqueName: \"kubernetes.io/projected/75bc150e-da7f-48ca-a481-c89057aaa371-kube-api-access-5rndg\") pod \"watcher25d0-account-delete-pnkcc\" (UID: \"75bc150e-da7f-48ca-a481-c89057aaa371\") " pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc" Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.352471 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75bc150e-da7f-48ca-a481-c89057aaa371-operator-scripts\") pod \"watcher25d0-account-delete-pnkcc\" (UID: \"75bc150e-da7f-48ca-a481-c89057aaa371\") " pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc" Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.454360 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rndg\" (UniqueName: \"kubernetes.io/projected/75bc150e-da7f-48ca-a481-c89057aaa371-kube-api-access-5rndg\") pod \"watcher25d0-account-delete-pnkcc\" (UID: \"75bc150e-da7f-48ca-a481-c89057aaa371\") " pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc" Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.454414 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75bc150e-da7f-48ca-a481-c89057aaa371-operator-scripts\") pod \"watcher25d0-account-delete-pnkcc\" (UID: \"75bc150e-da7f-48ca-a481-c89057aaa371\") " pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc" Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.455242 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75bc150e-da7f-48ca-a481-c89057aaa371-operator-scripts\") pod \"watcher25d0-account-delete-pnkcc\" (UID: \"75bc150e-da7f-48ca-a481-c89057aaa371\") " 
pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc" Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.473643 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rndg\" (UniqueName: \"kubernetes.io/projected/75bc150e-da7f-48ca-a481-c89057aaa371-kube-api-access-5rndg\") pod \"watcher25d0-account-delete-pnkcc\" (UID: \"75bc150e-da7f-48ca-a481-c89057aaa371\") " pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc" Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.525438 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc" Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.893879 4813 generic.go:334] "Generic (PLEG): container finished" podID="d92c2e8a-4e66-4290-b1db-60f9dd1b7091" containerID="8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d" exitCode=143 Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.893958 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d92c2e8a-4e66-4290-b1db-60f9dd1b7091","Type":"ContainerDied","Data":"8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d"} Feb 17 09:02:33 crc kubenswrapper[4813]: I0217 09:02:33.894579 4813 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-mb88z\" not found" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.054703 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher25d0-account-delete-pnkcc"] Feb 17 09:02:34 crc kubenswrapper[4813]: W0217 09:02:34.056499 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75bc150e_da7f_48ca_a481_c89057aaa371.slice/crio-794a147d63861d8f51dd1cbfcd01c9b892b6f86026c74a87b0c8046d73bb5069 WatchSource:0}: Error finding container 794a147d63861d8f51dd1cbfcd01c9b892b6f86026c74a87b0c8046d73bb5069: Status 404 returned error can't find the container with id 794a147d63861d8f51dd1cbfcd01c9b892b6f86026c74a87b0c8046d73bb5069 Feb 17 09:02:34 crc kubenswrapper[4813]: E0217 09:02:34.070589 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Feb 17 09:02:34 crc kubenswrapper[4813]: E0217 09:02:34.070675 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data podName:8469008e-d099-4b91-b304-645d0a759619 nodeName:}" failed. No retries permitted until 2026-02-17 09:02:34.570654813 +0000 UTC m=+1302.231416036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "8469008e-d099-4b91-b304-645d0a759619") : secret "watcher-kuttl-decision-engine-config-data" not found Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.407855 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.576880 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-combined-ca-bundle\") pod \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.577146 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhrl5\" (UniqueName: \"kubernetes.io/projected/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-kube-api-access-dhrl5\") pod \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.577196 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-custom-prometheus-ca\") pod \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.577287 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-logs\") pod \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.577350 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-config-data\") pod \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\" (UID: \"d92c2e8a-4e66-4290-b1db-60f9dd1b7091\") " Feb 17 09:02:34 crc kubenswrapper[4813]: E0217 09:02:34.577735 4813 secret.go:188] Couldn't get secret 
watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Feb 17 09:02:34 crc kubenswrapper[4813]: E0217 09:02:34.577796 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data podName:8469008e-d099-4b91-b304-645d0a759619 nodeName:}" failed. No retries permitted until 2026-02-17 09:02:35.577778651 +0000 UTC m=+1303.238539874 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "8469008e-d099-4b91-b304-645d0a759619") : secret "watcher-kuttl-decision-engine-config-data" not found Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.578095 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-logs" (OuterVolumeSpecName: "logs") pod "d92c2e8a-4e66-4290-b1db-60f9dd1b7091" (UID: "d92c2e8a-4e66-4290-b1db-60f9dd1b7091"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.595472 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-kube-api-access-dhrl5" (OuterVolumeSpecName: "kube-api-access-dhrl5") pod "d92c2e8a-4e66-4290-b1db-60f9dd1b7091" (UID: "d92c2e8a-4e66-4290-b1db-60f9dd1b7091"). InnerVolumeSpecName "kube-api-access-dhrl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.601943 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "d92c2e8a-4e66-4290-b1db-60f9dd1b7091" (UID: "d92c2e8a-4e66-4290-b1db-60f9dd1b7091"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.634558 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-config-data" (OuterVolumeSpecName: "config-data") pod "d92c2e8a-4e66-4290-b1db-60f9dd1b7091" (UID: "d92c2e8a-4e66-4290-b1db-60f9dd1b7091"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.642754 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d92c2e8a-4e66-4290-b1db-60f9dd1b7091" (UID: "d92c2e8a-4e66-4290-b1db-60f9dd1b7091"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.679192 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.679231 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhrl5\" (UniqueName: \"kubernetes.io/projected/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-kube-api-access-dhrl5\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.679245 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.679264 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.679276 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92c2e8a-4e66-4290-b1db-60f9dd1b7091-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.904910 4813 generic.go:334] "Generic (PLEG): container finished" podID="75bc150e-da7f-48ca-a481-c89057aaa371" containerID="44fb9bc493af8b1137a41097d3f6fcc2dae062733ea721bcd96c23e5ad5675b2" exitCode=0 Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.904958 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc" event={"ID":"75bc150e-da7f-48ca-a481-c89057aaa371","Type":"ContainerDied","Data":"44fb9bc493af8b1137a41097d3f6fcc2dae062733ea721bcd96c23e5ad5675b2"} Feb 17 09:02:34 crc 
kubenswrapper[4813]: I0217 09:02:34.905006 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc" event={"ID":"75bc150e-da7f-48ca-a481-c89057aaa371","Type":"ContainerStarted","Data":"794a147d63861d8f51dd1cbfcd01c9b892b6f86026c74a87b0c8046d73bb5069"} Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.907046 4813 generic.go:334] "Generic (PLEG): container finished" podID="d92c2e8a-4e66-4290-b1db-60f9dd1b7091" containerID="318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230" exitCode=0 Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.907255 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d92c2e8a-4e66-4290-b1db-60f9dd1b7091","Type":"ContainerDied","Data":"318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230"} Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.907455 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d92c2e8a-4e66-4290-b1db-60f9dd1b7091","Type":"ContainerDied","Data":"a0b923e982068c12e3d341c545516c3f5aaf5b98c702d44fa95672902a2b58c6"} Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.907539 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.907334 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="8469008e-d099-4b91-b304-645d0a759619" containerName="watcher-decision-engine" containerID="cri-o://6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5" gracePeriod=30 Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.907816 4813 scope.go:117] "RemoveContainer" containerID="318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.940551 4813 scope.go:117] "RemoveContainer" containerID="8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.955536 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.965233 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.979491 4813 scope.go:117] "RemoveContainer" containerID="318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230" Feb 17 09:02:34 crc kubenswrapper[4813]: E0217 09:02:34.981571 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230\": container with ID starting with 318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230 not found: ID does not exist" containerID="318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.981773 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230"} err="failed to get container status \"318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230\": rpc error: code = NotFound desc = could not find container \"318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230\": container with ID starting with 318d3d9e6a241a3872668ba99b2974ff0f21e0db4ef2991e104e3bac287a3230 not found: ID does not exist" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.981922 4813 scope.go:117] "RemoveContainer" containerID="8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d" Feb 17 09:02:34 crc kubenswrapper[4813]: E0217 09:02:34.983369 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d\": container with ID starting with 8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d not found: ID does not exist" containerID="8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d" Feb 17 09:02:34 crc kubenswrapper[4813]: I0217 09:02:34.983541 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d"} err="failed to get container status \"8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d\": rpc error: code = NotFound desc = could not find container \"8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d\": container with ID starting with 8119bac367e7ffcae9f76fdfa991025bf0cad7031a0ecbbdeeba40b2cad4b80d not found: ID does not exist" Feb 17 09:02:35 crc kubenswrapper[4813]: I0217 09:02:35.124240 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037fc18b-2e29-4234-8614-fd0eec194ce1" path="/var/lib/kubelet/pods/037fc18b-2e29-4234-8614-fd0eec194ce1/volumes" Feb 17 09:02:35 crc kubenswrapper[4813]: I0217 
09:02:35.125622 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d92c2e8a-4e66-4290-b1db-60f9dd1b7091" path="/var/lib/kubelet/pods/d92c2e8a-4e66-4290-b1db-60f9dd1b7091/volumes"
Feb 17 09:02:35 crc kubenswrapper[4813]: I0217 09:02:35.166140 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 09:02:35 crc kubenswrapper[4813]: I0217 09:02:35.166646 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 09:02:35 crc kubenswrapper[4813]: E0217 09:02:35.601066 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:02:35 crc kubenswrapper[4813]: E0217 09:02:35.601171 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data podName:8469008e-d099-4b91-b304-645d0a759619 nodeName:}" failed. No retries permitted until 2026-02-17 09:02:37.601145806 +0000 UTC m=+1305.261907049 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "8469008e-d099-4b91-b304-645d0a759619") : secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:02:35 crc kubenswrapper[4813]: E0217 09:02:35.800718 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 17 09:02:35 crc kubenswrapper[4813]: E0217 09:02:35.804100 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 17 09:02:35 crc kubenswrapper[4813]: E0217 09:02:35.805451 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 17 09:02:35 crc kubenswrapper[4813]: E0217 09:02:35.805492 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="5f03c620-782b-43d2-b1d0-2eda3feb85e3" containerName="watcher-applier"
Feb 17 09:02:35 crc kubenswrapper[4813]: I0217 09:02:35.917050 4813 generic.go:334] "Generic (PLEG): container finished" podID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerID="df923b084f4b92731e30f59aa6d5abb977bfc2b571b2d4720d2e7d8d72b2f8b2" exitCode=0
Feb 17 09:02:35 crc kubenswrapper[4813]: I0217 09:02:35.917114 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5408fa96-9ac7-4cf4-9348-7151a1e27ae5","Type":"ContainerDied","Data":"df923b084f4b92731e30f59aa6d5abb977bfc2b571b2d4720d2e7d8d72b2f8b2"}
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.044566 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.227923 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-combined-ca-bundle\") pod \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") "
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.228009 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-log-httpd\") pod \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") "
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.228094 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-config-data\") pod \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") "
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.228142 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-scripts\") pod \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") "
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.228202 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z5qs\" (UniqueName: \"kubernetes.io/projected/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-kube-api-access-5z5qs\") pod \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") "
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.228274 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-run-httpd\") pod \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") "
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.228297 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-sg-core-conf-yaml\") pod \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") "
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.228341 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-ceilometer-tls-certs\") pod \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\" (UID: \"5408fa96-9ac7-4cf4-9348-7151a1e27ae5\") "
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.228967 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5408fa96-9ac7-4cf4-9348-7151a1e27ae5" (UID: "5408fa96-9ac7-4cf4-9348-7151a1e27ae5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.233365 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-scripts" (OuterVolumeSpecName: "scripts") pod "5408fa96-9ac7-4cf4-9348-7151a1e27ae5" (UID: "5408fa96-9ac7-4cf4-9348-7151a1e27ae5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.233639 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5408fa96-9ac7-4cf4-9348-7151a1e27ae5" (UID: "5408fa96-9ac7-4cf4-9348-7151a1e27ae5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.236292 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-kube-api-access-5z5qs" (OuterVolumeSpecName: "kube-api-access-5z5qs") pod "5408fa96-9ac7-4cf4-9348-7151a1e27ae5" (UID: "5408fa96-9ac7-4cf4-9348-7151a1e27ae5"). InnerVolumeSpecName "kube-api-access-5z5qs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.279471 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5408fa96-9ac7-4cf4-9348-7151a1e27ae5" (UID: "5408fa96-9ac7-4cf4-9348-7151a1e27ae5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.330591 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.330615 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z5qs\" (UniqueName: \"kubernetes.io/projected/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-kube-api-access-5z5qs\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.330624 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.330633 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.330641 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.331388 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc"
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.373345 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5408fa96-9ac7-4cf4-9348-7151a1e27ae5" (UID: "5408fa96-9ac7-4cf4-9348-7151a1e27ae5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.401488 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5408fa96-9ac7-4cf4-9348-7151a1e27ae5" (UID: "5408fa96-9ac7-4cf4-9348-7151a1e27ae5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.426163 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-config-data" (OuterVolumeSpecName: "config-data") pod "5408fa96-9ac7-4cf4-9348-7151a1e27ae5" (UID: "5408fa96-9ac7-4cf4-9348-7151a1e27ae5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.431790 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75bc150e-da7f-48ca-a481-c89057aaa371-operator-scripts\") pod \"75bc150e-da7f-48ca-a481-c89057aaa371\" (UID: \"75bc150e-da7f-48ca-a481-c89057aaa371\") "
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.431971 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rndg\" (UniqueName: \"kubernetes.io/projected/75bc150e-da7f-48ca-a481-c89057aaa371-kube-api-access-5rndg\") pod \"75bc150e-da7f-48ca-a481-c89057aaa371\" (UID: \"75bc150e-da7f-48ca-a481-c89057aaa371\") "
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.432354 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.432373 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.432382 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5408fa96-9ac7-4cf4-9348-7151a1e27ae5-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.432647 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75bc150e-da7f-48ca-a481-c89057aaa371-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75bc150e-da7f-48ca-a481-c89057aaa371" (UID: "75bc150e-da7f-48ca-a481-c89057aaa371"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.440471 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75bc150e-da7f-48ca-a481-c89057aaa371-kube-api-access-5rndg" (OuterVolumeSpecName: "kube-api-access-5rndg") pod "75bc150e-da7f-48ca-a481-c89057aaa371" (UID: "75bc150e-da7f-48ca-a481-c89057aaa371"). InnerVolumeSpecName "kube-api-access-5rndg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.534303 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rndg\" (UniqueName: \"kubernetes.io/projected/75bc150e-da7f-48ca-a481-c89057aaa371-kube-api-access-5rndg\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.534364 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75bc150e-da7f-48ca-a481-c89057aaa371-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.932775 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5408fa96-9ac7-4cf4-9348-7151a1e27ae5","Type":"ContainerDied","Data":"98c724c7dfa41c2867e927c0c071ebdc3823f594998c96dd2f8de72a56dcbf3d"}
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.933109 4813 scope.go:117] "RemoveContainer" containerID="9c651d69c19286c392cf0a16b8bc0ba002e45d6601ff0b130a848ad746e45e4d"
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.933280 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.939297 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc" event={"ID":"75bc150e-da7f-48ca-a481-c89057aaa371","Type":"ContainerDied","Data":"794a147d63861d8f51dd1cbfcd01c9b892b6f86026c74a87b0c8046d73bb5069"}
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.939363 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="794a147d63861d8f51dd1cbfcd01c9b892b6f86026c74a87b0c8046d73bb5069"
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.939426 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher25d0-account-delete-pnkcc"
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.972558 4813 scope.go:117] "RemoveContainer" containerID="7139f89ee12027d1282389884c3917406ab267f0d5857f9f077df0a6a56e6fec"
Feb 17 09:02:36 crc kubenswrapper[4813]: I0217 09:02:36.996369 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.004637 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.006243 4813 scope.go:117] "RemoveContainer" containerID="df923b084f4b92731e30f59aa6d5abb977bfc2b571b2d4720d2e7d8d72b2f8b2"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.021254 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:02:37 crc kubenswrapper[4813]: E0217 09:02:37.022233 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92c2e8a-4e66-4290-b1db-60f9dd1b7091" containerName="watcher-api"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.022255 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92c2e8a-4e66-4290-b1db-60f9dd1b7091" containerName="watcher-api"
Feb 17 09:02:37 crc kubenswrapper[4813]: E0217 09:02:37.022286 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="ceilometer-notification-agent"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.022296 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="ceilometer-notification-agent"
Feb 17 09:02:37 crc kubenswrapper[4813]: E0217 09:02:37.022340 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92c2e8a-4e66-4290-b1db-60f9dd1b7091" containerName="watcher-kuttl-api-log"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.022351 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92c2e8a-4e66-4290-b1db-60f9dd1b7091" containerName="watcher-kuttl-api-log"
Feb 17 09:02:37 crc kubenswrapper[4813]: E0217 09:02:37.022378 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="proxy-httpd"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.022386 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="proxy-httpd"
Feb 17 09:02:37 crc kubenswrapper[4813]: E0217 09:02:37.022423 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="sg-core"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.022431 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="sg-core"
Feb 17 09:02:37 crc kubenswrapper[4813]: E0217 09:02:37.022453 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="ceilometer-central-agent"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.022463 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="ceilometer-central-agent"
Feb 17 09:02:37 crc kubenswrapper[4813]: E0217 09:02:37.022478 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75bc150e-da7f-48ca-a481-c89057aaa371" containerName="mariadb-account-delete"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.022487 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="75bc150e-da7f-48ca-a481-c89057aaa371" containerName="mariadb-account-delete"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.036543 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="proxy-httpd"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.036591 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92c2e8a-4e66-4290-b1db-60f9dd1b7091" containerName="watcher-kuttl-api-log"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.036615 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="ceilometer-notification-agent"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.036686 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="75bc150e-da7f-48ca-a481-c89057aaa371" containerName="mariadb-account-delete"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.036713 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92c2e8a-4e66-4290-b1db-60f9dd1b7091" containerName="watcher-api"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.036744 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="sg-core"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.036770 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" containerName="ceilometer-central-agent"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.044430 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.049062 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.049515 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.049868 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.055222 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.055275 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-config-data\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.055347 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f79tz\" (UniqueName: \"kubernetes.io/projected/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-kube-api-access-f79tz\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.055387 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-log-httpd\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.055403 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.055441 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-run-httpd\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.055463 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-scripts\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.055549 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.059603 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.062408 4813 scope.go:117] "RemoveContainer" containerID="d630fda29b1576c42ce6919ad26f12ba86db7c1fe65c7c6e2545abfb441934db"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.120696 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5408fa96-9ac7-4cf4-9348-7151a1e27ae5" path="/var/lib/kubelet/pods/5408fa96-9ac7-4cf4-9348-7151a1e27ae5/volumes"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.156565 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.156629 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.156687 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-config-data\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.156718 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f79tz\" (UniqueName: \"kubernetes.io/projected/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-kube-api-access-f79tz\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.156757 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-log-httpd\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.156771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.156794 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-run-httpd\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.156811 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-scripts\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.157458 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-log-httpd\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.161677 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-run-httpd\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.163218 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.163401 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.166020 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-config-data\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.167748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-scripts\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.167756 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.178999 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f79tz\" (UniqueName: \"kubernetes.io/projected/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-kube-api-access-f79tz\") pod \"ceilometer-0\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.395754 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: E0217 09:02:37.664097 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:02:37 crc kubenswrapper[4813]: E0217 09:02:37.664180 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data podName:8469008e-d099-4b91-b304-645d0a759619 nodeName:}" failed. No retries permitted until 2026-02-17 09:02:41.66415957 +0000 UTC m=+1309.324920793 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "8469008e-d099-4b91-b304-645d0a759619") : secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.803794 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.867367 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6czl\" (UniqueName: \"kubernetes.io/projected/5f03c620-782b-43d2-b1d0-2eda3feb85e3-kube-api-access-h6czl\") pod \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") "
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.867418 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f03c620-782b-43d2-b1d0-2eda3feb85e3-combined-ca-bundle\") pod \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") "
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.867489 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f03c620-782b-43d2-b1d0-2eda3feb85e3-config-data\") pod \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") "
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.867558 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f03c620-782b-43d2-b1d0-2eda3feb85e3-logs\") pod \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\" (UID: \"5f03c620-782b-43d2-b1d0-2eda3feb85e3\") "
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.868997 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f03c620-782b-43d2-b1d0-2eda3feb85e3-logs" (OuterVolumeSpecName: "logs") pod "5f03c620-782b-43d2-b1d0-2eda3feb85e3" (UID: "5f03c620-782b-43d2-b1d0-2eda3feb85e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.875615 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f03c620-782b-43d2-b1d0-2eda3feb85e3-kube-api-access-h6czl" (OuterVolumeSpecName: "kube-api-access-h6czl") pod "5f03c620-782b-43d2-b1d0-2eda3feb85e3" (UID: "5f03c620-782b-43d2-b1d0-2eda3feb85e3"). InnerVolumeSpecName "kube-api-access-h6czl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.902720 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.903245 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f03c620-782b-43d2-b1d0-2eda3feb85e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f03c620-782b-43d2-b1d0-2eda3feb85e3" (UID: "5f03c620-782b-43d2-b1d0-2eda3feb85e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.907954 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f03c620-782b-43d2-b1d0-2eda3feb85e3-config-data" (OuterVolumeSpecName: "config-data") pod "5f03c620-782b-43d2-b1d0-2eda3feb85e3" (UID: "5f03c620-782b-43d2-b1d0-2eda3feb85e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:02:37 crc kubenswrapper[4813]: W0217 09:02:37.909285 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b0af28b_1f27_4bc5_a3be_719d6be5cd9b.slice/crio-c4f4928d1919e60f199dfd5de4fe2b17a7800f7a4bf2e67bb40e8cd03a94b86f WatchSource:0}: Error finding container c4f4928d1919e60f199dfd5de4fe2b17a7800f7a4bf2e67bb40e8cd03a94b86f: Status 404 returned error can't find the container with id c4f4928d1919e60f199dfd5de4fe2b17a7800f7a4bf2e67bb40e8cd03a94b86f
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.950507 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b","Type":"ContainerStarted","Data":"c4f4928d1919e60f199dfd5de4fe2b17a7800f7a4bf2e67bb40e8cd03a94b86f"}
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.951994 4813 generic.go:334] "Generic (PLEG): container finished" podID="5f03c620-782b-43d2-b1d0-2eda3feb85e3" containerID="2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa" exitCode=0
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.952039 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5f03c620-782b-43d2-b1d0-2eda3feb85e3","Type":"ContainerDied","Data":"2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa"}
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.952066 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5f03c620-782b-43d2-b1d0-2eda3feb85e3","Type":"ContainerDied","Data":"a17fb6a6a7d7708ae4b07b9fc06924bfdc0d3144c164fbea646088ba733abcf4"}
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.952104 4813 scope.go:117] "RemoveContainer" containerID="2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.952138 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.969803 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6czl\" (UniqueName: \"kubernetes.io/projected/5f03c620-782b-43d2-b1d0-2eda3feb85e3-kube-api-access-h6czl\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.969827 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f03c620-782b-43d2-b1d0-2eda3feb85e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.969854 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f03c620-782b-43d2-b1d0-2eda3feb85e3-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.969862 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f03c620-782b-43d2-b1d0-2eda3feb85e3-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.984448 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.986922 4813 scope.go:117] "RemoveContainer" containerID="2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa"
Feb 17 09:02:37 crc kubenswrapper[4813]: E0217 09:02:37.994722 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa\": container with ID starting with 2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa not found: ID does not exist"
containerID="2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa" Feb 17 09:02:37 crc kubenswrapper[4813]: I0217 09:02:37.994768 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa"} err="failed to get container status \"2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa\": rpc error: code = NotFound desc = could not find container \"2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa\": container with ID starting with 2ffaea5cd53954203d6434bfb534546429a988c8e779fd7440e43987cc81b5aa not found: ID does not exist" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.001159 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.256738 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx"] Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.263152 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher25d0-account-delete-pnkcc"] Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.269727 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-s6zhs"] Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.275653 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-25d0-account-create-update-vzlsx"] Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.285232 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher25d0-account-delete-pnkcc"] Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.292732 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-s6zhs"] Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.624001 4813 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.685867 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-combined-ca-bundle\") pod \"8469008e-d099-4b91-b304-645d0a759619\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.685950 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data\") pod \"8469008e-d099-4b91-b304-645d0a759619\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.686085 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8469008e-d099-4b91-b304-645d0a759619-logs\") pod \"8469008e-d099-4b91-b304-645d0a759619\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.686129 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zk9g\" (UniqueName: \"kubernetes.io/projected/8469008e-d099-4b91-b304-645d0a759619-kube-api-access-2zk9g\") pod \"8469008e-d099-4b91-b304-645d0a759619\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.686380 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-custom-prometheus-ca\") pod \"8469008e-d099-4b91-b304-645d0a759619\" (UID: \"8469008e-d099-4b91-b304-645d0a759619\") " Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.687911 4813 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8469008e-d099-4b91-b304-645d0a759619-logs" (OuterVolumeSpecName: "logs") pod "8469008e-d099-4b91-b304-645d0a759619" (UID: "8469008e-d099-4b91-b304-645d0a759619"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.690579 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8469008e-d099-4b91-b304-645d0a759619-kube-api-access-2zk9g" (OuterVolumeSpecName: "kube-api-access-2zk9g") pod "8469008e-d099-4b91-b304-645d0a759619" (UID: "8469008e-d099-4b91-b304-645d0a759619"). InnerVolumeSpecName "kube-api-access-2zk9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.722123 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8469008e-d099-4b91-b304-645d0a759619" (UID: "8469008e-d099-4b91-b304-645d0a759619"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.733863 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8469008e-d099-4b91-b304-645d0a759619" (UID: "8469008e-d099-4b91-b304-645d0a759619"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.752688 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data" (OuterVolumeSpecName: "config-data") pod "8469008e-d099-4b91-b304-645d0a759619" (UID: "8469008e-d099-4b91-b304-645d0a759619"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.788373 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.788410 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.788426 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8469008e-d099-4b91-b304-645d0a759619-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.788439 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8469008e-d099-4b91-b304-645d0a759619-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.788451 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zk9g\" (UniqueName: \"kubernetes.io/projected/8469008e-d099-4b91-b304-645d0a759619-kube-api-access-2zk9g\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.961428 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b","Type":"ContainerStarted","Data":"63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978"} Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.964526 4813 generic.go:334] "Generic (PLEG): container finished" podID="8469008e-d099-4b91-b304-645d0a759619" containerID="6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5" exitCode=0 Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.964569 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8469008e-d099-4b91-b304-645d0a759619","Type":"ContainerDied","Data":"6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5"} Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.964601 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.964618 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8469008e-d099-4b91-b304-645d0a759619","Type":"ContainerDied","Data":"9fad73926130576f1cdd3d0398194fdc46c6b050b167079e4a6cf22d9ddc82fb"} Feb 17 09:02:38 crc kubenswrapper[4813]: I0217 09:02:38.964643 4813 scope.go:117] "RemoveContainer" containerID="6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5" Feb 17 09:02:39 crc kubenswrapper[4813]: I0217 09:02:39.004850 4813 scope.go:117] "RemoveContainer" containerID="6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5" Feb 17 09:02:39 crc kubenswrapper[4813]: E0217 09:02:39.006579 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5\": container with ID starting with 6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5 not found: ID does 
not exist" containerID="6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5" Feb 17 09:02:39 crc kubenswrapper[4813]: I0217 09:02:39.006620 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5"} err="failed to get container status \"6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5\": rpc error: code = NotFound desc = could not find container \"6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5\": container with ID starting with 6abf3c84b26ed9525cbc35fcc62e8ed7754a7fa33ead33bec1807d9dcf5eacf5 not found: ID does not exist" Feb 17 09:02:39 crc kubenswrapper[4813]: I0217 09:02:39.011238 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:02:39 crc kubenswrapper[4813]: I0217 09:02:39.021937 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:02:39 crc kubenswrapper[4813]: I0217 09:02:39.123647 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f03c620-782b-43d2-b1d0-2eda3feb85e3" path="/var/lib/kubelet/pods/5f03c620-782b-43d2-b1d0-2eda3feb85e3/volumes" Feb 17 09:02:39 crc kubenswrapper[4813]: I0217 09:02:39.124981 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75bc150e-da7f-48ca-a481-c89057aaa371" path="/var/lib/kubelet/pods/75bc150e-da7f-48ca-a481-c89057aaa371/volumes" Feb 17 09:02:39 crc kubenswrapper[4813]: I0217 09:02:39.126585 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8469008e-d099-4b91-b304-645d0a759619" path="/var/lib/kubelet/pods/8469008e-d099-4b91-b304-645d0a759619/volumes" Feb 17 09:02:39 crc kubenswrapper[4813]: I0217 09:02:39.127915 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e12e1212-e6b3-4a1f-bd01-9983df669f5c" 
path="/var/lib/kubelet/pods/e12e1212-e6b3-4a1f-bd01-9983df669f5c/volumes" Feb 17 09:02:39 crc kubenswrapper[4813]: I0217 09:02:39.135765 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea46d7cb-8914-4419-baa6-c7cbeefad3af" path="/var/lib/kubelet/pods/ea46d7cb-8914-4419-baa6-c7cbeefad3af/volumes" Feb 17 09:02:39 crc kubenswrapper[4813]: I0217 09:02:39.974937 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b","Type":"ContainerStarted","Data":"3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3"} Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.385767 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-905e-account-create-update-vtdmc"] Feb 17 09:02:40 crc kubenswrapper[4813]: E0217 09:02:40.386111 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f03c620-782b-43d2-b1d0-2eda3feb85e3" containerName="watcher-applier" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.386134 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f03c620-782b-43d2-b1d0-2eda3feb85e3" containerName="watcher-applier" Feb 17 09:02:40 crc kubenswrapper[4813]: E0217 09:02:40.386157 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8469008e-d099-4b91-b304-645d0a759619" containerName="watcher-decision-engine" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.386164 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8469008e-d099-4b91-b304-645d0a759619" containerName="watcher-decision-engine" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.386356 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8469008e-d099-4b91-b304-645d0a759619" containerName="watcher-decision-engine" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.386386 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5f03c620-782b-43d2-b1d0-2eda3feb85e3" containerName="watcher-applier" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.386897 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.394356 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-gj4tb"] Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.394758 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.395573 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-gj4tb" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.403454 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-905e-account-create-update-vtdmc"] Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.411773 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wzfj\" (UniqueName: \"kubernetes.io/projected/c8d3bc13-7d2f-438d-9219-7b08d8390037-kube-api-access-2wzfj\") pod \"watcher-db-create-gj4tb\" (UID: \"c8d3bc13-7d2f-438d-9219-7b08d8390037\") " pod="watcher-kuttl-default/watcher-db-create-gj4tb" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.411858 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e847b9df-636c-485f-b51d-862673c13a58-operator-scripts\") pod \"watcher-905e-account-create-update-vtdmc\" (UID: \"e847b9df-636c-485f-b51d-862673c13a58\") " pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.411901 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d3bc13-7d2f-438d-9219-7b08d8390037-operator-scripts\") pod \"watcher-db-create-gj4tb\" (UID: \"c8d3bc13-7d2f-438d-9219-7b08d8390037\") " pod="watcher-kuttl-default/watcher-db-create-gj4tb" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.411967 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtx5b\" (UniqueName: \"kubernetes.io/projected/e847b9df-636c-485f-b51d-862673c13a58-kube-api-access-jtx5b\") pod \"watcher-905e-account-create-update-vtdmc\" (UID: \"e847b9df-636c-485f-b51d-862673c13a58\") " pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.427782 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-gj4tb"] Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.512400 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d3bc13-7d2f-438d-9219-7b08d8390037-operator-scripts\") pod \"watcher-db-create-gj4tb\" (UID: \"c8d3bc13-7d2f-438d-9219-7b08d8390037\") " pod="watcher-kuttl-default/watcher-db-create-gj4tb" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.512498 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtx5b\" (UniqueName: \"kubernetes.io/projected/e847b9df-636c-485f-b51d-862673c13a58-kube-api-access-jtx5b\") pod \"watcher-905e-account-create-update-vtdmc\" (UID: \"e847b9df-636c-485f-b51d-862673c13a58\") " pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.512532 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wzfj\" (UniqueName: 
\"kubernetes.io/projected/c8d3bc13-7d2f-438d-9219-7b08d8390037-kube-api-access-2wzfj\") pod \"watcher-db-create-gj4tb\" (UID: \"c8d3bc13-7d2f-438d-9219-7b08d8390037\") " pod="watcher-kuttl-default/watcher-db-create-gj4tb" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.512603 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e847b9df-636c-485f-b51d-862673c13a58-operator-scripts\") pod \"watcher-905e-account-create-update-vtdmc\" (UID: \"e847b9df-636c-485f-b51d-862673c13a58\") " pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.513059 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d3bc13-7d2f-438d-9219-7b08d8390037-operator-scripts\") pod \"watcher-db-create-gj4tb\" (UID: \"c8d3bc13-7d2f-438d-9219-7b08d8390037\") " pod="watcher-kuttl-default/watcher-db-create-gj4tb" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.513276 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e847b9df-636c-485f-b51d-862673c13a58-operator-scripts\") pod \"watcher-905e-account-create-update-vtdmc\" (UID: \"e847b9df-636c-485f-b51d-862673c13a58\") " pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.528085 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtx5b\" (UniqueName: \"kubernetes.io/projected/e847b9df-636c-485f-b51d-862673c13a58-kube-api-access-jtx5b\") pod \"watcher-905e-account-create-update-vtdmc\" (UID: \"e847b9df-636c-485f-b51d-862673c13a58\") " pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.528115 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2wzfj\" (UniqueName: \"kubernetes.io/projected/c8d3bc13-7d2f-438d-9219-7b08d8390037-kube-api-access-2wzfj\") pod \"watcher-db-create-gj4tb\" (UID: \"c8d3bc13-7d2f-438d-9219-7b08d8390037\") " pod="watcher-kuttl-default/watcher-db-create-gj4tb" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.700180 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.716850 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-gj4tb" Feb 17 09:02:40 crc kubenswrapper[4813]: I0217 09:02:40.990894 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b","Type":"ContainerStarted","Data":"de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b"} Feb 17 09:02:41 crc kubenswrapper[4813]: I0217 09:02:41.238387 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-gj4tb"] Feb 17 09:02:41 crc kubenswrapper[4813]: W0217 09:02:41.242288 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8d3bc13_7d2f_438d_9219_7b08d8390037.slice/crio-05bc074a6a683c62dd1cb52be3f21a6a4252d6144890b4982e4a0cad16211d84 WatchSource:0}: Error finding container 05bc074a6a683c62dd1cb52be3f21a6a4252d6144890b4982e4a0cad16211d84: Status 404 returned error can't find the container with id 05bc074a6a683c62dd1cb52be3f21a6a4252d6144890b4982e4a0cad16211d84 Feb 17 09:02:41 crc kubenswrapper[4813]: I0217 09:02:41.312136 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-905e-account-create-update-vtdmc"] Feb 17 09:02:41 crc kubenswrapper[4813]: W0217 09:02:41.322899 4813 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode847b9df_636c_485f_b51d_862673c13a58.slice/crio-72b99c3ce4d1e909841b250937eeedb5c17cc3fc7df4ac4b35c9d475ab319360 WatchSource:0}: Error finding container 72b99c3ce4d1e909841b250937eeedb5c17cc3fc7df4ac4b35c9d475ab319360: Status 404 returned error can't find the container with id 72b99c3ce4d1e909841b250937eeedb5c17cc3fc7df4ac4b35c9d475ab319360 Feb 17 09:02:42 crc kubenswrapper[4813]: I0217 09:02:42.003725 4813 generic.go:334] "Generic (PLEG): container finished" podID="e847b9df-636c-485f-b51d-862673c13a58" containerID="66842e66f6758f0d04f1033c619d0108c4eb52ea3900d5881b0cddfa6f81d694" exitCode=0 Feb 17 09:02:42 crc kubenswrapper[4813]: I0217 09:02:42.003767 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" event={"ID":"e847b9df-636c-485f-b51d-862673c13a58","Type":"ContainerDied","Data":"66842e66f6758f0d04f1033c619d0108c4eb52ea3900d5881b0cddfa6f81d694"} Feb 17 09:02:42 crc kubenswrapper[4813]: I0217 09:02:42.004154 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" event={"ID":"e847b9df-636c-485f-b51d-862673c13a58","Type":"ContainerStarted","Data":"72b99c3ce4d1e909841b250937eeedb5c17cc3fc7df4ac4b35c9d475ab319360"} Feb 17 09:02:42 crc kubenswrapper[4813]: I0217 09:02:42.008687 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b","Type":"ContainerStarted","Data":"27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c"} Feb 17 09:02:42 crc kubenswrapper[4813]: I0217 09:02:42.008842 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:02:42 crc kubenswrapper[4813]: I0217 09:02:42.014266 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="c8d3bc13-7d2f-438d-9219-7b08d8390037" containerID="3b35b8326d5924650137295748e1edcdd5cf96d0becaa2462775875bea40c42f" exitCode=0 Feb 17 09:02:42 crc kubenswrapper[4813]: I0217 09:02:42.014323 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-gj4tb" event={"ID":"c8d3bc13-7d2f-438d-9219-7b08d8390037","Type":"ContainerDied","Data":"3b35b8326d5924650137295748e1edcdd5cf96d0becaa2462775875bea40c42f"} Feb 17 09:02:42 crc kubenswrapper[4813]: I0217 09:02:42.014344 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-gj4tb" event={"ID":"c8d3bc13-7d2f-438d-9219-7b08d8390037","Type":"ContainerStarted","Data":"05bc074a6a683c62dd1cb52be3f21a6a4252d6144890b4982e4a0cad16211d84"} Feb 17 09:02:42 crc kubenswrapper[4813]: I0217 09:02:42.073192 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.736505669 podStartE2EDuration="6.073164689s" podCreationTimestamp="2026-02-17 09:02:36 +0000 UTC" firstStartedPulling="2026-02-17 09:02:37.911460296 +0000 UTC m=+1305.572221509" lastFinishedPulling="2026-02-17 09:02:41.248119306 +0000 UTC m=+1308.908880529" observedRunningTime="2026-02-17 09:02:42.067908709 +0000 UTC m=+1309.728669952" watchObservedRunningTime="2026-02-17 09:02:42.073164689 +0000 UTC m=+1309.733925922" Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.466644 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-gj4tb" Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.493325 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.568981 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d3bc13-7d2f-438d-9219-7b08d8390037-operator-scripts\") pod \"c8d3bc13-7d2f-438d-9219-7b08d8390037\" (UID: \"c8d3bc13-7d2f-438d-9219-7b08d8390037\") " Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.569057 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtx5b\" (UniqueName: \"kubernetes.io/projected/e847b9df-636c-485f-b51d-862673c13a58-kube-api-access-jtx5b\") pod \"e847b9df-636c-485f-b51d-862673c13a58\" (UID: \"e847b9df-636c-485f-b51d-862673c13a58\") " Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.569117 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e847b9df-636c-485f-b51d-862673c13a58-operator-scripts\") pod \"e847b9df-636c-485f-b51d-862673c13a58\" (UID: \"e847b9df-636c-485f-b51d-862673c13a58\") " Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.569154 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wzfj\" (UniqueName: \"kubernetes.io/projected/c8d3bc13-7d2f-438d-9219-7b08d8390037-kube-api-access-2wzfj\") pod \"c8d3bc13-7d2f-438d-9219-7b08d8390037\" (UID: \"c8d3bc13-7d2f-438d-9219-7b08d8390037\") " Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.569748 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8d3bc13-7d2f-438d-9219-7b08d8390037-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8d3bc13-7d2f-438d-9219-7b08d8390037" (UID: "c8d3bc13-7d2f-438d-9219-7b08d8390037"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.570513 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e847b9df-636c-485f-b51d-862673c13a58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e847b9df-636c-485f-b51d-862673c13a58" (UID: "e847b9df-636c-485f-b51d-862673c13a58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.575781 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e847b9df-636c-485f-b51d-862673c13a58-kube-api-access-jtx5b" (OuterVolumeSpecName: "kube-api-access-jtx5b") pod "e847b9df-636c-485f-b51d-862673c13a58" (UID: "e847b9df-636c-485f-b51d-862673c13a58"). InnerVolumeSpecName "kube-api-access-jtx5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.576537 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d3bc13-7d2f-438d-9219-7b08d8390037-kube-api-access-2wzfj" (OuterVolumeSpecName: "kube-api-access-2wzfj") pod "c8d3bc13-7d2f-438d-9219-7b08d8390037" (UID: "c8d3bc13-7d2f-438d-9219-7b08d8390037"). InnerVolumeSpecName "kube-api-access-2wzfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.671831 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtx5b\" (UniqueName: \"kubernetes.io/projected/e847b9df-636c-485f-b51d-862673c13a58-kube-api-access-jtx5b\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.671876 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e847b9df-636c-485f-b51d-862673c13a58-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.671896 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wzfj\" (UniqueName: \"kubernetes.io/projected/c8d3bc13-7d2f-438d-9219-7b08d8390037-kube-api-access-2wzfj\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:43 crc kubenswrapper[4813]: I0217 09:02:43.671915 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d3bc13-7d2f-438d-9219-7b08d8390037-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:44 crc kubenswrapper[4813]: I0217 09:02:44.031149 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" Feb 17 09:02:44 crc kubenswrapper[4813]: I0217 09:02:44.031103 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-905e-account-create-update-vtdmc" event={"ID":"e847b9df-636c-485f-b51d-862673c13a58","Type":"ContainerDied","Data":"72b99c3ce4d1e909841b250937eeedb5c17cc3fc7df4ac4b35c9d475ab319360"} Feb 17 09:02:44 crc kubenswrapper[4813]: I0217 09:02:44.031420 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72b99c3ce4d1e909841b250937eeedb5c17cc3fc7df4ac4b35c9d475ab319360" Feb 17 09:02:44 crc kubenswrapper[4813]: I0217 09:02:44.032625 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-gj4tb" event={"ID":"c8d3bc13-7d2f-438d-9219-7b08d8390037","Type":"ContainerDied","Data":"05bc074a6a683c62dd1cb52be3f21a6a4252d6144890b4982e4a0cad16211d84"} Feb 17 09:02:44 crc kubenswrapper[4813]: I0217 09:02:44.032648 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05bc074a6a683c62dd1cb52be3f21a6a4252d6144890b4982e4a0cad16211d84" Feb 17 09:02:44 crc kubenswrapper[4813]: I0217 09:02:44.032676 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-gj4tb" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.682432 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jjldp"] Feb 17 09:02:45 crc kubenswrapper[4813]: E0217 09:02:45.683148 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e847b9df-636c-485f-b51d-862673c13a58" containerName="mariadb-account-create-update" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.683166 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e847b9df-636c-485f-b51d-862673c13a58" containerName="mariadb-account-create-update" Feb 17 09:02:45 crc kubenswrapper[4813]: E0217 09:02:45.683184 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d3bc13-7d2f-438d-9219-7b08d8390037" containerName="mariadb-database-create" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.683192 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d3bc13-7d2f-438d-9219-7b08d8390037" containerName="mariadb-database-create" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.683389 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e847b9df-636c-485f-b51d-862673c13a58" containerName="mariadb-account-create-update" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.683422 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d3bc13-7d2f-438d-9219-7b08d8390037" containerName="mariadb-database-create" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.684031 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.685874 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.686357 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-d7kqr" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.698327 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jjldp"] Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.703934 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-jjldp\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.704029 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-db-sync-config-data\") pod \"watcher-kuttl-db-sync-jjldp\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.704062 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-config-data\") pod \"watcher-kuttl-db-sync-jjldp\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.704095 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl89h\" (UniqueName: \"kubernetes.io/projected/c59139ad-43cf-4589-84b3-652e2453b3ba-kube-api-access-zl89h\") pod \"watcher-kuttl-db-sync-jjldp\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.805932 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-jjldp\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.806108 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-db-sync-config-data\") pod \"watcher-kuttl-db-sync-jjldp\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.806173 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-config-data\") pod \"watcher-kuttl-db-sync-jjldp\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.806232 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl89h\" (UniqueName: \"kubernetes.io/projected/c59139ad-43cf-4589-84b3-652e2453b3ba-kube-api-access-zl89h\") pod \"watcher-kuttl-db-sync-jjldp\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 
09:02:45.810698 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-config-data\") pod \"watcher-kuttl-db-sync-jjldp\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.810874 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-jjldp\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.812369 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-db-sync-config-data\") pod \"watcher-kuttl-db-sync-jjldp\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:45 crc kubenswrapper[4813]: I0217 09:02:45.825408 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl89h\" (UniqueName: \"kubernetes.io/projected/c59139ad-43cf-4589-84b3-652e2453b3ba-kube-api-access-zl89h\") pod \"watcher-kuttl-db-sync-jjldp\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:46 crc kubenswrapper[4813]: I0217 09:02:46.004923 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:46 crc kubenswrapper[4813]: I0217 09:02:46.554057 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jjldp"] Feb 17 09:02:47 crc kubenswrapper[4813]: I0217 09:02:47.061159 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" event={"ID":"c59139ad-43cf-4589-84b3-652e2453b3ba","Type":"ContainerStarted","Data":"f70ad92c893a59fed058461e103f14572a962ec5580c8cfe9df5a95697ecf0e7"} Feb 17 09:02:47 crc kubenswrapper[4813]: I0217 09:02:47.061521 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" event={"ID":"c59139ad-43cf-4589-84b3-652e2453b3ba","Type":"ContainerStarted","Data":"aac07ca218b94f82590ac2bdc668ead0aa458498e6a3e64a880586612a62c2f5"} Feb 17 09:02:47 crc kubenswrapper[4813]: I0217 09:02:47.078538 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" podStartSLOduration=2.078520953 podStartE2EDuration="2.078520953s" podCreationTimestamp="2026-02-17 09:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:02:47.075641131 +0000 UTC m=+1314.736402354" watchObservedRunningTime="2026-02-17 09:02:47.078520953 +0000 UTC m=+1314.739282176" Feb 17 09:02:49 crc kubenswrapper[4813]: I0217 09:02:49.077780 4813 generic.go:334] "Generic (PLEG): container finished" podID="c59139ad-43cf-4589-84b3-652e2453b3ba" containerID="f70ad92c893a59fed058461e103f14572a962ec5580c8cfe9df5a95697ecf0e7" exitCode=0 Feb 17 09:02:49 crc kubenswrapper[4813]: I0217 09:02:49.077972 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" 
event={"ID":"c59139ad-43cf-4589-84b3-652e2453b3ba","Type":"ContainerDied","Data":"f70ad92c893a59fed058461e103f14572a962ec5580c8cfe9df5a95697ecf0e7"} Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.432397 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.575171 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl89h\" (UniqueName: \"kubernetes.io/projected/c59139ad-43cf-4589-84b3-652e2453b3ba-kube-api-access-zl89h\") pod \"c59139ad-43cf-4589-84b3-652e2453b3ba\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.575547 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-db-sync-config-data\") pod \"c59139ad-43cf-4589-84b3-652e2453b3ba\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.575914 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-config-data\") pod \"c59139ad-43cf-4589-84b3-652e2453b3ba\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.575995 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-combined-ca-bundle\") pod \"c59139ad-43cf-4589-84b3-652e2453b3ba\" (UID: \"c59139ad-43cf-4589-84b3-652e2453b3ba\") " Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.582895 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c59139ad-43cf-4589-84b3-652e2453b3ba" (UID: "c59139ad-43cf-4589-84b3-652e2453b3ba"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.583633 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c59139ad-43cf-4589-84b3-652e2453b3ba-kube-api-access-zl89h" (OuterVolumeSpecName: "kube-api-access-zl89h") pod "c59139ad-43cf-4589-84b3-652e2453b3ba" (UID: "c59139ad-43cf-4589-84b3-652e2453b3ba"). InnerVolumeSpecName "kube-api-access-zl89h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.608496 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c59139ad-43cf-4589-84b3-652e2453b3ba" (UID: "c59139ad-43cf-4589-84b3-652e2453b3ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.632077 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-config-data" (OuterVolumeSpecName: "config-data") pod "c59139ad-43cf-4589-84b3-652e2453b3ba" (UID: "c59139ad-43cf-4589-84b3-652e2453b3ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.678401 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.678433 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.678442 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59139ad-43cf-4589-84b3-652e2453b3ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:50 crc kubenswrapper[4813]: I0217 09:02:50.678452 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl89h\" (UniqueName: \"kubernetes.io/projected/c59139ad-43cf-4589-84b3-652e2453b3ba-kube-api-access-zl89h\") on node \"crc\" DevicePath \"\"" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.121137 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.144221 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jjldp" event={"ID":"c59139ad-43cf-4589-84b3-652e2453b3ba","Type":"ContainerDied","Data":"aac07ca218b94f82590ac2bdc668ead0aa458498e6a3e64a880586612a62c2f5"} Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.144263 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aac07ca218b94f82590ac2bdc668ead0aa458498e6a3e64a880586612a62c2f5" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.374904 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:02:51 crc kubenswrapper[4813]: E0217 09:02:51.375278 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59139ad-43cf-4589-84b3-652e2453b3ba" containerName="watcher-kuttl-db-sync" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.375300 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59139ad-43cf-4589-84b3-652e2453b3ba" containerName="watcher-kuttl-db-sync" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.375577 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59139ad-43cf-4589-84b3-652e2453b3ba" containerName="watcher-kuttl-db-sync" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.376201 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.378799 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.379141 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-d7kqr" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.401525 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.456267 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.457233 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.458778 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.471499 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.489974 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.490073 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/626d563a-6a24-4038-9ef0-aa109f1edbba-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.490097 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnwzs\" (UniqueName: \"kubernetes.io/projected/626d563a-6a24-4038-9ef0-aa109f1edbba-kube-api-access-pnwzs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.490150 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.490173 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.523750 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.526392 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.533112 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.535095 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.591256 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.591300 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.591348 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwsqt\" (UniqueName: \"kubernetes.io/projected/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-kube-api-access-kwsqt\") pod \"watcher-kuttl-applier-0\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.591372 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: 
\"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.591641 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.591691 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.591746 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/626d563a-6a24-4038-9ef0-aa109f1edbba-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.591775 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnwzs\" (UniqueName: \"kubernetes.io/projected/626d563a-6a24-4038-9ef0-aa109f1edbba-kube-api-access-pnwzs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.591813 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" 
(UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.592324 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/626d563a-6a24-4038-9ef0-aa109f1edbba-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.594370 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.595438 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.604838 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.616005 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnwzs\" (UniqueName: \"kubernetes.io/projected/626d563a-6a24-4038-9ef0-aa109f1edbba-kube-api-access-pnwzs\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"626d563a-6a24-4038-9ef0-aa109f1edbba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.693066 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.693451 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fftfn\" (UniqueName: \"kubernetes.io/projected/53e60460-602e-4976-ba9b-cd36c2dcc673-kube-api-access-fftfn\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.693585 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.694191 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwsqt\" (UniqueName: \"kubernetes.io/projected/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-kube-api-access-kwsqt\") pod \"watcher-kuttl-applier-0\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.694356 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e60460-602e-4976-ba9b-cd36c2dcc673-logs\") pod 
\"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.694536 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.694647 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.694437 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.695706 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.695426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.695915 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.697913 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.698201 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 
09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.714450 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwsqt\" (UniqueName: \"kubernetes.io/projected/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-kube-api-access-kwsqt\") pod \"watcher-kuttl-applier-0\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.773261 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.799087 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.799156 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.799180 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fftfn\" (UniqueName: \"kubernetes.io/projected/53e60460-602e-4976-ba9b-cd36c2dcc673-kube-api-access-fftfn\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.799225 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e60460-602e-4976-ba9b-cd36c2dcc673-logs\") pod 
\"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.799252 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.800140 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e60460-602e-4976-ba9b-cd36c2dcc673-logs\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.808249 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.818519 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.820178 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 
crc kubenswrapper[4813]: I0217 09:02:51.822143 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fftfn\" (UniqueName: \"kubernetes.io/projected/53e60460-602e-4976-ba9b-cd36c2dcc673-kube-api-access-fftfn\") pod \"watcher-kuttl-api-0\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:51 crc kubenswrapper[4813]: I0217 09:02:51.842822 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:52 crc kubenswrapper[4813]: W0217 09:02:52.223466 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc0e3ff0_0206_476b_8c76_0bba9ae5e484.slice/crio-2e5b806b0ac4b2b746ec9ca74ff7889e361f7b4edd20142269b56bccdad75cde WatchSource:0}: Error finding container 2e5b806b0ac4b2b746ec9ca74ff7889e361f7b4edd20142269b56bccdad75cde: Status 404 returned error can't find the container with id 2e5b806b0ac4b2b746ec9ca74ff7889e361f7b4edd20142269b56bccdad75cde Feb 17 09:02:52 crc kubenswrapper[4813]: I0217 09:02:52.223489 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:02:52 crc kubenswrapper[4813]: I0217 09:02:52.281176 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:02:52 crc kubenswrapper[4813]: W0217 09:02:52.285262 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod626d563a_6a24_4038_9ef0_aa109f1edbba.slice/crio-14f79ac056602cac60bcc86468f3227f9b73877213a187a47678ade6df997f9f WatchSource:0}: Error finding container 14f79ac056602cac60bcc86468f3227f9b73877213a187a47678ade6df997f9f: Status 404 returned error can't find the container with id 14f79ac056602cac60bcc86468f3227f9b73877213a187a47678ade6df997f9f Feb 17 
09:02:52 crc kubenswrapper[4813]: I0217 09:02:52.379718 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:02:52 crc kubenswrapper[4813]: W0217 09:02:52.389094 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53e60460_602e_4976_ba9b_cd36c2dcc673.slice/crio-59a9acb1a25983fde31f51f59441026f09028c99919121817c42697dc04158be WatchSource:0}: Error finding container 59a9acb1a25983fde31f51f59441026f09028c99919121817c42697dc04158be: Status 404 returned error can't find the container with id 59a9acb1a25983fde31f51f59441026f09028c99919121817c42697dc04158be Feb 17 09:02:53 crc kubenswrapper[4813]: I0217 09:02:53.138279 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"626d563a-6a24-4038-9ef0-aa109f1edbba","Type":"ContainerStarted","Data":"e6fe251ca4dc5b6e9c25c820b1e7acb8e35fef18b264e53b83fd275a695d1862"} Feb 17 09:02:53 crc kubenswrapper[4813]: I0217 09:02:53.138584 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"626d563a-6a24-4038-9ef0-aa109f1edbba","Type":"ContainerStarted","Data":"14f79ac056602cac60bcc86468f3227f9b73877213a187a47678ade6df997f9f"} Feb 17 09:02:53 crc kubenswrapper[4813]: I0217 09:02:53.140721 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"53e60460-602e-4976-ba9b-cd36c2dcc673","Type":"ContainerStarted","Data":"0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35"} Feb 17 09:02:53 crc kubenswrapper[4813]: I0217 09:02:53.140760 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"53e60460-602e-4976-ba9b-cd36c2dcc673","Type":"ContainerStarted","Data":"b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549"} 
Feb 17 09:02:53 crc kubenswrapper[4813]: I0217 09:02:53.140771 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"53e60460-602e-4976-ba9b-cd36c2dcc673","Type":"ContainerStarted","Data":"59a9acb1a25983fde31f51f59441026f09028c99919121817c42697dc04158be"} Feb 17 09:02:53 crc kubenswrapper[4813]: I0217 09:02:53.142350 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:53 crc kubenswrapper[4813]: I0217 09:02:53.150649 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dc0e3ff0-0206-476b-8c76-0bba9ae5e484","Type":"ContainerStarted","Data":"9fce49d514432d5183bc04291dfddedb2261b05742cd9a65a2872212658d5fe9"} Feb 17 09:02:53 crc kubenswrapper[4813]: I0217 09:02:53.150687 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dc0e3ff0-0206-476b-8c76-0bba9ae5e484","Type":"ContainerStarted","Data":"2e5b806b0ac4b2b746ec9ca74ff7889e361f7b4edd20142269b56bccdad75cde"} Feb 17 09:02:53 crc kubenswrapper[4813]: I0217 09:02:53.186459 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.186440266 podStartE2EDuration="2.186440266s" podCreationTimestamp="2026-02-17 09:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:02:53.176456072 +0000 UTC m=+1320.837217295" watchObservedRunningTime="2026-02-17 09:02:53.186440266 +0000 UTC m=+1320.847201489" Feb 17 09:02:53 crc kubenswrapper[4813]: I0217 09:02:53.210378 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.210358557 podStartE2EDuration="2.210358557s" 
podCreationTimestamp="2026-02-17 09:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:02:53.204009936 +0000 UTC m=+1320.864771159" watchObservedRunningTime="2026-02-17 09:02:53.210358557 +0000 UTC m=+1320.871119780" Feb 17 09:02:53 crc kubenswrapper[4813]: I0217 09:02:53.226208 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.226189717 podStartE2EDuration="2.226189717s" podCreationTimestamp="2026-02-17 09:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:02:53.222328147 +0000 UTC m=+1320.883089380" watchObservedRunningTime="2026-02-17 09:02:53.226189717 +0000 UTC m=+1320.886950960" Feb 17 09:02:55 crc kubenswrapper[4813]: I0217 09:02:55.165118 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:02:56 crc kubenswrapper[4813]: I0217 09:02:56.774179 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:02:56 crc kubenswrapper[4813]: I0217 09:02:56.844061 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:01 crc kubenswrapper[4813]: I0217 09:03:01.695115 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:01 crc kubenswrapper[4813]: I0217 09:03:01.718326 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:01 crc kubenswrapper[4813]: I0217 09:03:01.775612 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:01 crc kubenswrapper[4813]: I0217 09:03:01.801604 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:01 crc kubenswrapper[4813]: I0217 09:03:01.843913 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:01 crc kubenswrapper[4813]: I0217 09:03:01.847920 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:02 crc kubenswrapper[4813]: I0217 09:03:02.258720 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:02 crc kubenswrapper[4813]: I0217 09:03:02.290249 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:02 crc kubenswrapper[4813]: I0217 09:03:02.291858 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:02 crc kubenswrapper[4813]: I0217 09:03:02.304448 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.386191 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.387343 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="ceilometer-central-agent" containerID="cri-o://63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978" gracePeriod=30 Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.387563 4813 
kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="proxy-httpd" containerID="cri-o://27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c" gracePeriod=30 Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.387669 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="sg-core" containerID="cri-o://de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b" gracePeriod=30 Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.387776 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="ceilometer-notification-agent" containerID="cri-o://3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3" gracePeriod=30 Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.466391 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jjldp"] Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.478824 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jjldp"] Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.487198 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher905e-account-delete-sv6j6"] Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.488163 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.506155 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher905e-account-delete-sv6j6"] Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.550971 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.551791 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="dc0e3ff0-0206-476b-8c76-0bba9ae5e484" containerName="watcher-applier" containerID="cri-o://9fce49d514432d5183bc04291dfddedb2261b05742cd9a65a2872212658d5fe9" gracePeriod=30 Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.611811 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.612019 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="53e60460-602e-4976-ba9b-cd36c2dcc673" containerName="watcher-kuttl-api-log" containerID="cri-o://b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549" gracePeriod=30 Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.617334 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="53e60460-602e-4976-ba9b-cd36c2dcc673" containerName="watcher-api" containerID="cri-o://0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35" gracePeriod=30 Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.619940 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrvjr\" (UniqueName: \"kubernetes.io/projected/f2059184-9627-4ec4-a119-46525b0239a0-kube-api-access-nrvjr\") 
pod \"watcher905e-account-delete-sv6j6\" (UID: \"f2059184-9627-4ec4-a119-46525b0239a0\") " pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.622597 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2059184-9627-4ec4-a119-46525b0239a0-operator-scripts\") pod \"watcher905e-account-delete-sv6j6\" (UID: \"f2059184-9627-4ec4-a119-46525b0239a0\") " pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.633959 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:03:04 crc kubenswrapper[4813]: E0217 09:03:04.654602 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b0af28b_1f27_4bc5_a3be_719d6be5cd9b.slice/crio-conmon-de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b.scope\": RecentStats: unable to find data in memory cache]" Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.724813 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2059184-9627-4ec4-a119-46525b0239a0-operator-scripts\") pod \"watcher905e-account-delete-sv6j6\" (UID: \"f2059184-9627-4ec4-a119-46525b0239a0\") " pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.724917 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrvjr\" (UniqueName: \"kubernetes.io/projected/f2059184-9627-4ec4-a119-46525b0239a0-kube-api-access-nrvjr\") pod \"watcher905e-account-delete-sv6j6\" (UID: \"f2059184-9627-4ec4-a119-46525b0239a0\") " 
pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.726208 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2059184-9627-4ec4-a119-46525b0239a0-operator-scripts\") pod \"watcher905e-account-delete-sv6j6\" (UID: \"f2059184-9627-4ec4-a119-46525b0239a0\") " pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.757213 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrvjr\" (UniqueName: \"kubernetes.io/projected/f2059184-9627-4ec4-a119-46525b0239a0-kube-api-access-nrvjr\") pod \"watcher905e-account-delete-sv6j6\" (UID: \"f2059184-9627-4ec4-a119-46525b0239a0\") " pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.770002 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.152:3000/\": read tcp 10.217.0.2:40638->10.217.0.152:3000: read: connection reset by peer" Feb 17 09:03:04 crc kubenswrapper[4813]: I0217 09:03:04.923711 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.131961 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c59139ad-43cf-4589-84b3-652e2453b3ba" path="/var/lib/kubelet/pods/c59139ad-43cf-4589-84b3-652e2453b3ba/volumes" Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.166890 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.166938 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.166979 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.167577 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8f1831aa866d7234a4f9752273e3fc18af2abeede207be480bb974f39d90c1d"} pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.167623 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" 
containerName="machine-config-daemon" containerID="cri-o://e8f1831aa866d7234a4f9752273e3fc18af2abeede207be480bb974f39d90c1d" gracePeriod=600 Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.309679 4813 generic.go:334] "Generic (PLEG): container finished" podID="53e60460-602e-4976-ba9b-cd36c2dcc673" containerID="b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549" exitCode=143 Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.309779 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"53e60460-602e-4976-ba9b-cd36c2dcc673","Type":"ContainerDied","Data":"b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549"} Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.355618 4813 generic.go:334] "Generic (PLEG): container finished" podID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerID="27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c" exitCode=0 Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.355655 4813 generic.go:334] "Generic (PLEG): container finished" podID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerID="de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b" exitCode=2 Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.355664 4813 generic.go:334] "Generic (PLEG): container finished" podID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerID="63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978" exitCode=0 Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.355840 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="626d563a-6a24-4038-9ef0-aa109f1edbba" containerName="watcher-decision-engine" containerID="cri-o://e6fe251ca4dc5b6e9c25c820b1e7acb8e35fef18b264e53b83fd275a695d1862" gracePeriod=30 Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.356099 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b","Type":"ContainerDied","Data":"27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c"} Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.356125 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b","Type":"ContainerDied","Data":"de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b"} Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.356134 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b","Type":"ContainerDied","Data":"63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978"} Feb 17 09:03:05 crc kubenswrapper[4813]: I0217 09:03:05.491805 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher905e-account-delete-sv6j6"] Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.048995 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.055919 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.146847 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-custom-prometheus-ca\") pod \"53e60460-602e-4976-ba9b-cd36c2dcc673\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.146927 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fftfn\" (UniqueName: \"kubernetes.io/projected/53e60460-602e-4976-ba9b-cd36c2dcc673-kube-api-access-fftfn\") pod \"53e60460-602e-4976-ba9b-cd36c2dcc673\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.146986 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-combined-ca-bundle\") pod \"53e60460-602e-4976-ba9b-cd36c2dcc673\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.147134 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-config-data\") pod \"53e60460-602e-4976-ba9b-cd36c2dcc673\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.147191 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e60460-602e-4976-ba9b-cd36c2dcc673-logs\") pod \"53e60460-602e-4976-ba9b-cd36c2dcc673\" (UID: \"53e60460-602e-4976-ba9b-cd36c2dcc673\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.148018 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/53e60460-602e-4976-ba9b-cd36c2dcc673-logs" (OuterVolumeSpecName: "logs") pod "53e60460-602e-4976-ba9b-cd36c2dcc673" (UID: "53e60460-602e-4976-ba9b-cd36c2dcc673"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.153139 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e60460-602e-4976-ba9b-cd36c2dcc673-kube-api-access-fftfn" (OuterVolumeSpecName: "kube-api-access-fftfn") pod "53e60460-602e-4976-ba9b-cd36c2dcc673" (UID: "53e60460-602e-4976-ba9b-cd36c2dcc673"). InnerVolumeSpecName "kube-api-access-fftfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.174599 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "53e60460-602e-4976-ba9b-cd36c2dcc673" (UID: "53e60460-602e-4976-ba9b-cd36c2dcc673"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.175127 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53e60460-602e-4976-ba9b-cd36c2dcc673" (UID: "53e60460-602e-4976-ba9b-cd36c2dcc673"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.194531 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-config-data" (OuterVolumeSpecName: "config-data") pod "53e60460-602e-4976-ba9b-cd36c2dcc673" (UID: "53e60460-602e-4976-ba9b-cd36c2dcc673"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.248419 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-scripts\") pod \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.248862 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-config-data\") pod \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.248996 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-run-httpd\") pod \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.249214 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-log-httpd\") pod \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.249360 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" (UID: "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.249583 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f79tz\" (UniqueName: \"kubernetes.io/projected/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-kube-api-access-f79tz\") pod \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.249768 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-ceilometer-tls-certs\") pod \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.250759 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-sg-core-conf-yaml\") pod \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.249686 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" (UID: "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.250899 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-scripts" (OuterVolumeSpecName: "scripts") pod "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" (UID: "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.250917 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-combined-ca-bundle\") pod \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\" (UID: \"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b\") " Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.251620 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.251637 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.251647 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e60460-602e-4976-ba9b-cd36c2dcc673-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.251656 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.251665 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fftfn\" (UniqueName: \"kubernetes.io/projected/53e60460-602e-4976-ba9b-cd36c2dcc673-kube-api-access-fftfn\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.251674 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.251682 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.251690 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e60460-602e-4976-ba9b-cd36c2dcc673-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.252886 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-kube-api-access-f79tz" (OuterVolumeSpecName: "kube-api-access-f79tz") pod "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" (UID: "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b"). InnerVolumeSpecName "kube-api-access-f79tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.283632 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" (UID: "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.321933 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-config-data" (OuterVolumeSpecName: "config-data") pod "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" (UID: "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.325531 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" (UID: "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.337104 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" (UID: "2b0af28b-1f27-4bc5-a3be-719d6be5cd9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.353034 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.353068 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f79tz\" (UniqueName: \"kubernetes.io/projected/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-kube-api-access-f79tz\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.353079 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.353087 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.353096 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.364995 4813 generic.go:334] "Generic (PLEG): container finished" podID="53e60460-602e-4976-ba9b-cd36c2dcc673" containerID="0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35" exitCode=0 Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.365091 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.366602 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"53e60460-602e-4976-ba9b-cd36c2dcc673","Type":"ContainerDied","Data":"0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35"} Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.366643 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"53e60460-602e-4976-ba9b-cd36c2dcc673","Type":"ContainerDied","Data":"59a9acb1a25983fde31f51f59441026f09028c99919121817c42697dc04158be"} Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.366659 4813 scope.go:117] "RemoveContainer" containerID="0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.369124 4813 generic.go:334] "Generic (PLEG): container finished" podID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerID="3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3" exitCode=0 Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.369168 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b","Type":"ContainerDied","Data":"3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3"} Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.369186 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2b0af28b-1f27-4bc5-a3be-719d6be5cd9b","Type":"ContainerDied","Data":"c4f4928d1919e60f199dfd5de4fe2b17a7800f7a4bf2e67bb40e8cd03a94b86f"} Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.369240 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.387081 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2059184-9627-4ec4-a119-46525b0239a0" containerID="9d9b14557521d459bb2291fee6080d1db6376cd255dbe0f2f3dbe9e28d1703c3" exitCode=0 Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.387268 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" event={"ID":"f2059184-9627-4ec4-a119-46525b0239a0","Type":"ContainerDied","Data":"9d9b14557521d459bb2291fee6080d1db6376cd255dbe0f2f3dbe9e28d1703c3"} Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.387296 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" event={"ID":"f2059184-9627-4ec4-a119-46525b0239a0","Type":"ContainerStarted","Data":"f3dd654ea54d740605ffeaf1a08a7c6d1e60337b6d2c740b94ea98cabad2d36f"} Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.401439 4813 scope.go:117] "RemoveContainer" containerID="b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.401928 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a6ba827-b08b-4163-b067-d9adb119398d" 
containerID="e8f1831aa866d7234a4f9752273e3fc18af2abeede207be480bb974f39d90c1d" exitCode=0 Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.401970 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerDied","Data":"e8f1831aa866d7234a4f9752273e3fc18af2abeede207be480bb974f39d90c1d"} Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.401999 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerStarted","Data":"4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58"} Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.403907 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.414442 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.417815 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.423104 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.425501 4813 scope.go:117] "RemoveContainer" containerID="0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35" Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.426689 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35\": container with ID starting with 0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35 not found: ID does not exist" 
containerID="0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.426734 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35"} err="failed to get container status \"0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35\": rpc error: code = NotFound desc = could not find container \"0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35\": container with ID starting with 0d065f870e0b4b37e3c1cb7fe4ecb3ffe1b3f75a0054248ecd42dc3928febc35 not found: ID does not exist" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.426758 4813 scope.go:117] "RemoveContainer" containerID="b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549" Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.427177 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549\": container with ID starting with b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549 not found: ID does not exist" containerID="b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.427224 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549"} err="failed to get container status \"b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549\": rpc error: code = NotFound desc = could not find container \"b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549\": container with ID starting with b3b3bb803a666c3ddba7766b82bf2951f876f7a672202a9d113518522b8b5549 not found: ID does not exist" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.427258 4813 scope.go:117] 
"RemoveContainer" containerID="27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.455977 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.456351 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e60460-602e-4976-ba9b-cd36c2dcc673" containerName="watcher-kuttl-api-log" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.456370 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e60460-602e-4976-ba9b-cd36c2dcc673" containerName="watcher-kuttl-api-log" Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.456392 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="ceilometer-notification-agent" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.456399 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="ceilometer-notification-agent" Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.456415 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e60460-602e-4976-ba9b-cd36c2dcc673" containerName="watcher-api" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.456422 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e60460-602e-4976-ba9b-cd36c2dcc673" containerName="watcher-api" Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.456434 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="proxy-httpd" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.456441 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="proxy-httpd" Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.456454 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="sg-core" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.456463 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="sg-core" Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.456484 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="ceilometer-central-agent" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.456490 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="ceilometer-central-agent" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.456644 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="sg-core" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.456654 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="proxy-httpd" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.456665 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="ceilometer-central-agent" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.456674 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" containerName="ceilometer-notification-agent" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.456685 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e60460-602e-4976-ba9b-cd36c2dcc673" containerName="watcher-kuttl-api-log" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.456694 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e60460-602e-4976-ba9b-cd36c2dcc673" containerName="watcher-api" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.457778 4813 scope.go:117] "RemoveContainer" 
containerID="de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.458301 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.460180 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.460517 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.460764 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.473531 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.499883 4813 scope.go:117] "RemoveContainer" containerID="3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.559472 4813 scope.go:117] "RemoveContainer" containerID="63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.577218 4813 scope.go:117] "RemoveContainer" containerID="27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c" Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.577674 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c\": container with ID starting with 27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c not found: ID does not exist" containerID="27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c" Feb 17 09:03:06 crc 
kubenswrapper[4813]: I0217 09:03:06.577752 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c"} err="failed to get container status \"27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c\": rpc error: code = NotFound desc = could not find container \"27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c\": container with ID starting with 27facf81c254e2218a9f46753b278637733d16352f1f7e9a1f19b311e1617e1c not found: ID does not exist" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.577786 4813 scope.go:117] "RemoveContainer" containerID="de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b" Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.578282 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b\": container with ID starting with de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b not found: ID does not exist" containerID="de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.578327 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b"} err="failed to get container status \"de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b\": rpc error: code = NotFound desc = could not find container \"de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b\": container with ID starting with de37a600b844daea3af4ddd92b585f1e34c431776704fe3d5e81db0aa9c6683b not found: ID does not exist" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.578353 4813 scope.go:117] "RemoveContainer" containerID="3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3" Feb 17 
09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.578627 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3\": container with ID starting with 3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3 not found: ID does not exist" containerID="3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.578653 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3"} err="failed to get container status \"3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3\": rpc error: code = NotFound desc = could not find container \"3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3\": container with ID starting with 3c9e5e1dac1b7b54bb6aa2b200069101094aebcd419416e991ec909f52f75ab3 not found: ID does not exist" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.578669 4813 scope.go:117] "RemoveContainer" containerID="63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978" Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.578860 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978\": container with ID starting with 63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978 not found: ID does not exist" containerID="63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.578879 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978"} err="failed to get container status 
\"63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978\": rpc error: code = NotFound desc = could not find container \"63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978\": container with ID starting with 63ab09f59a9c4114e5bc8f2bef993ffed37d76807dec985a163960a1e7473978 not found: ID does not exist" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.578892 4813 scope.go:117] "RemoveContainer" containerID="e4284fb46e7e232957891008abfaed8819255b12cf7fd236cbf3602b7e1a318a" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.658729 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8549c45-ddd2-4a25-904b-de7f41318de6-log-httpd\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.658776 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.658893 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.658975 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-scripts\") pod \"ceilometer-0\" (UID: 
\"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.659018 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdx99\" (UniqueName: \"kubernetes.io/projected/d8549c45-ddd2-4a25-904b-de7f41318de6-kube-api-access-fdx99\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.659130 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.659157 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8549c45-ddd2-4a25-904b-de7f41318de6-run-httpd\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.659372 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-config-data\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.760984 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.761416 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8549c45-ddd2-4a25-904b-de7f41318de6-run-httpd\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.761555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-config-data\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.761689 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8549c45-ddd2-4a25-904b-de7f41318de6-log-httpd\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.761728 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.761758 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-scripts\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.761785 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.761822 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdx99\" (UniqueName: \"kubernetes.io/projected/d8549c45-ddd2-4a25-904b-de7f41318de6-kube-api-access-fdx99\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.762113 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8549c45-ddd2-4a25-904b-de7f41318de6-run-httpd\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.763058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8549c45-ddd2-4a25-904b-de7f41318de6-log-httpd\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.771300 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-scripts\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.777209 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.778561 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-config-data\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.786176 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.792659 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9fce49d514432d5183bc04291dfddedb2261b05742cd9a65a2872212658d5fe9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.793345 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.798222 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9fce49d514432d5183bc04291dfddedb2261b05742cd9a65a2872212658d5fe9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.800087 4813 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9fce49d514432d5183bc04291dfddedb2261b05742cd9a65a2872212658d5fe9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:03:06 crc kubenswrapper[4813]: E0217 09:03:06.800151 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="dc0e3ff0-0206-476b-8c76-0bba9ae5e484" containerName="watcher-applier" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.813463 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdx99\" (UniqueName: \"kubernetes.io/projected/d8549c45-ddd2-4a25-904b-de7f41318de6-kube-api-access-fdx99\") pod \"ceilometer-0\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.934679 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:06 crc kubenswrapper[4813]: I0217 09:03:06.935383 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:07 crc kubenswrapper[4813]: I0217 09:03:07.135939 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b0af28b-1f27-4bc5-a3be-719d6be5cd9b" path="/var/lib/kubelet/pods/2b0af28b-1f27-4bc5-a3be-719d6be5cd9b/volumes" Feb 17 09:03:07 crc kubenswrapper[4813]: I0217 09:03:07.136944 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e60460-602e-4976-ba9b-cd36c2dcc673" path="/var/lib/kubelet/pods/53e60460-602e-4976-ba9b-cd36c2dcc673/volumes" Feb 17 09:03:07 crc kubenswrapper[4813]: I0217 09:03:07.414888 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:07 crc kubenswrapper[4813]: I0217 09:03:07.787867 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" Feb 17 09:03:07 crc kubenswrapper[4813]: I0217 09:03:07.980174 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrvjr\" (UniqueName: \"kubernetes.io/projected/f2059184-9627-4ec4-a119-46525b0239a0-kube-api-access-nrvjr\") pod \"f2059184-9627-4ec4-a119-46525b0239a0\" (UID: \"f2059184-9627-4ec4-a119-46525b0239a0\") " Feb 17 09:03:07 crc kubenswrapper[4813]: I0217 09:03:07.980632 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2059184-9627-4ec4-a119-46525b0239a0-operator-scripts\") pod \"f2059184-9627-4ec4-a119-46525b0239a0\" (UID: \"f2059184-9627-4ec4-a119-46525b0239a0\") " Feb 17 09:03:07 crc kubenswrapper[4813]: I0217 09:03:07.982008 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2059184-9627-4ec4-a119-46525b0239a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2059184-9627-4ec4-a119-46525b0239a0" (UID: 
"f2059184-9627-4ec4-a119-46525b0239a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:03:07 crc kubenswrapper[4813]: I0217 09:03:07.984940 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2059184-9627-4ec4-a119-46525b0239a0-kube-api-access-nrvjr" (OuterVolumeSpecName: "kube-api-access-nrvjr") pod "f2059184-9627-4ec4-a119-46525b0239a0" (UID: "f2059184-9627-4ec4-a119-46525b0239a0"). InnerVolumeSpecName "kube-api-access-nrvjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.082360 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrvjr\" (UniqueName: \"kubernetes.io/projected/f2059184-9627-4ec4-a119-46525b0239a0-kube-api-access-nrvjr\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.082397 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2059184-9627-4ec4-a119-46525b0239a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.441179 4813 generic.go:334] "Generic (PLEG): container finished" podID="dc0e3ff0-0206-476b-8c76-0bba9ae5e484" containerID="9fce49d514432d5183bc04291dfddedb2261b05742cd9a65a2872212658d5fe9" exitCode=0 Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.441263 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dc0e3ff0-0206-476b-8c76-0bba9ae5e484","Type":"ContainerDied","Data":"9fce49d514432d5183bc04291dfddedb2261b05742cd9a65a2872212658d5fe9"} Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.442741 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"d8549c45-ddd2-4a25-904b-de7f41318de6","Type":"ContainerStarted","Data":"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6"} Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.442822 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d8549c45-ddd2-4a25-904b-de7f41318de6","Type":"ContainerStarted","Data":"61f94e46a5d6e4eff97304e6308df3055cb529052afab2c8e75cfbf356f6a909"} Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.443918 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" event={"ID":"f2059184-9627-4ec4-a119-46525b0239a0","Type":"ContainerDied","Data":"f3dd654ea54d740605ffeaf1a08a7c6d1e60337b6d2c740b94ea98cabad2d36f"} Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.443950 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3dd654ea54d740605ffeaf1a08a7c6d1e60337b6d2c740b94ea98cabad2d36f" Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.444009 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher905e-account-delete-sv6j6" Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.729923 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.898701 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwsqt\" (UniqueName: \"kubernetes.io/projected/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-kube-api-access-kwsqt\") pod \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.899203 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-combined-ca-bundle\") pod \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.899323 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-config-data\") pod \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.899597 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-logs\") pod \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\" (UID: \"dc0e3ff0-0206-476b-8c76-0bba9ae5e484\") " Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.900009 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-logs" (OuterVolumeSpecName: "logs") pod "dc0e3ff0-0206-476b-8c76-0bba9ae5e484" (UID: "dc0e3ff0-0206-476b-8c76-0bba9ae5e484"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.906447 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-kube-api-access-kwsqt" (OuterVolumeSpecName: "kube-api-access-kwsqt") pod "dc0e3ff0-0206-476b-8c76-0bba9ae5e484" (UID: "dc0e3ff0-0206-476b-8c76-0bba9ae5e484"). InnerVolumeSpecName "kube-api-access-kwsqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.925422 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc0e3ff0-0206-476b-8c76-0bba9ae5e484" (UID: "dc0e3ff0-0206-476b-8c76-0bba9ae5e484"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:08 crc kubenswrapper[4813]: I0217 09:03:08.958414 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-config-data" (OuterVolumeSpecName: "config-data") pod "dc0e3ff0-0206-476b-8c76-0bba9ae5e484" (UID: "dc0e3ff0-0206-476b-8c76-0bba9ae5e484"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.002700 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.002736 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.002746 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.002756 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwsqt\" (UniqueName: \"kubernetes.io/projected/dc0e3ff0-0206-476b-8c76-0bba9ae5e484-kube-api-access-kwsqt\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.461840 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.461861 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dc0e3ff0-0206-476b-8c76-0bba9ae5e484","Type":"ContainerDied","Data":"2e5b806b0ac4b2b746ec9ca74ff7889e361f7b4edd20142269b56bccdad75cde"} Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.461938 4813 scope.go:117] "RemoveContainer" containerID="9fce49d514432d5183bc04291dfddedb2261b05742cd9a65a2872212658d5fe9" Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.470771 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d8549c45-ddd2-4a25-904b-de7f41318de6","Type":"ContainerStarted","Data":"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9"} Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.503138 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.519249 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.579479 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-gj4tb"] Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.585753 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-905e-account-create-update-vtdmc"] Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.592694 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher905e-account-delete-sv6j6"] Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.599217 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher905e-account-delete-sv6j6"] Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 
09:03:09.605420 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-gj4tb"] Feb 17 09:03:09 crc kubenswrapper[4813]: I0217 09:03:09.610681 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-905e-account-create-update-vtdmc"] Feb 17 09:03:10 crc kubenswrapper[4813]: I0217 09:03:10.482986 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d8549c45-ddd2-4a25-904b-de7f41318de6","Type":"ContainerStarted","Data":"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3"} Feb 17 09:03:11 crc kubenswrapper[4813]: I0217 09:03:11.122972 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8d3bc13-7d2f-438d-9219-7b08d8390037" path="/var/lib/kubelet/pods/c8d3bc13-7d2f-438d-9219-7b08d8390037/volumes" Feb 17 09:03:11 crc kubenswrapper[4813]: I0217 09:03:11.124047 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc0e3ff0-0206-476b-8c76-0bba9ae5e484" path="/var/lib/kubelet/pods/dc0e3ff0-0206-476b-8c76-0bba9ae5e484/volumes" Feb 17 09:03:11 crc kubenswrapper[4813]: I0217 09:03:11.126257 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e847b9df-636c-485f-b51d-862673c13a58" path="/var/lib/kubelet/pods/e847b9df-636c-485f-b51d-862673c13a58/volumes" Feb 17 09:03:11 crc kubenswrapper[4813]: I0217 09:03:11.126994 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2059184-9627-4ec4-a119-46525b0239a0" path="/var/lib/kubelet/pods/f2059184-9627-4ec4-a119-46525b0239a0/volumes" Feb 17 09:03:11 crc kubenswrapper[4813]: I0217 09:03:11.496050 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d8549c45-ddd2-4a25-904b-de7f41318de6","Type":"ContainerStarted","Data":"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b"} Feb 17 09:03:11 crc kubenswrapper[4813]: I0217 09:03:11.496200 
4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="ceilometer-central-agent" containerID="cri-o://a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6" gracePeriod=30 Feb 17 09:03:11 crc kubenswrapper[4813]: I0217 09:03:11.496502 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:11 crc kubenswrapper[4813]: I0217 09:03:11.496786 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="proxy-httpd" containerID="cri-o://a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b" gracePeriod=30 Feb 17 09:03:11 crc kubenswrapper[4813]: I0217 09:03:11.496838 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="sg-core" containerID="cri-o://5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3" gracePeriod=30 Feb 17 09:03:11 crc kubenswrapper[4813]: I0217 09:03:11.496897 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="ceilometer-notification-agent" containerID="cri-o://785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9" gracePeriod=30 Feb 17 09:03:11 crc kubenswrapper[4813]: I0217 09:03:11.539507 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.1410702329999998 podStartE2EDuration="5.539482319s" podCreationTimestamp="2026-02-17 09:03:06 +0000 UTC" firstStartedPulling="2026-02-17 09:03:07.43342333 +0000 UTC m=+1335.094184553" lastFinishedPulling="2026-02-17 09:03:10.831835416 +0000 
UTC m=+1338.492596639" observedRunningTime="2026-02-17 09:03:11.532872881 +0000 UTC m=+1339.193634104" watchObservedRunningTime="2026-02-17 09:03:11.539482319 +0000 UTC m=+1339.200243582" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.264890 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.459713 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-scripts\") pod \"d8549c45-ddd2-4a25-904b-de7f41318de6\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.459797 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-sg-core-conf-yaml\") pod \"d8549c45-ddd2-4a25-904b-de7f41318de6\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.459854 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-config-data\") pod \"d8549c45-ddd2-4a25-904b-de7f41318de6\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.459957 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdx99\" (UniqueName: \"kubernetes.io/projected/d8549c45-ddd2-4a25-904b-de7f41318de6-kube-api-access-fdx99\") pod \"d8549c45-ddd2-4a25-904b-de7f41318de6\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.460008 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d8549c45-ddd2-4a25-904b-de7f41318de6-log-httpd\") pod \"d8549c45-ddd2-4a25-904b-de7f41318de6\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.460055 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-combined-ca-bundle\") pod \"d8549c45-ddd2-4a25-904b-de7f41318de6\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.460115 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-ceilometer-tls-certs\") pod \"d8549c45-ddd2-4a25-904b-de7f41318de6\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.460186 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8549c45-ddd2-4a25-904b-de7f41318de6-run-httpd\") pod \"d8549c45-ddd2-4a25-904b-de7f41318de6\" (UID: \"d8549c45-ddd2-4a25-904b-de7f41318de6\") " Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.460652 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8549c45-ddd2-4a25-904b-de7f41318de6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d8549c45-ddd2-4a25-904b-de7f41318de6" (UID: "d8549c45-ddd2-4a25-904b-de7f41318de6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.460863 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8549c45-ddd2-4a25-904b-de7f41318de6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d8549c45-ddd2-4a25-904b-de7f41318de6" (UID: "d8549c45-ddd2-4a25-904b-de7f41318de6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.465861 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8549c45-ddd2-4a25-904b-de7f41318de6-kube-api-access-fdx99" (OuterVolumeSpecName: "kube-api-access-fdx99") pod "d8549c45-ddd2-4a25-904b-de7f41318de6" (UID: "d8549c45-ddd2-4a25-904b-de7f41318de6"). InnerVolumeSpecName "kube-api-access-fdx99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.477139 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-scripts" (OuterVolumeSpecName: "scripts") pod "d8549c45-ddd2-4a25-904b-de7f41318de6" (UID: "d8549c45-ddd2-4a25-904b-de7f41318de6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.497389 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d8549c45-ddd2-4a25-904b-de7f41318de6" (UID: "d8549c45-ddd2-4a25-904b-de7f41318de6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.509350 4813 generic.go:334] "Generic (PLEG): container finished" podID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerID="a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b" exitCode=0 Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.509394 4813 generic.go:334] "Generic (PLEG): container finished" podID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerID="5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3" exitCode=2 Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.509407 4813 generic.go:334] "Generic (PLEG): container finished" podID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerID="785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9" exitCode=0 Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.509419 4813 generic.go:334] "Generic (PLEG): container finished" podID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerID="a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6" exitCode=0 Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.509447 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d8549c45-ddd2-4a25-904b-de7f41318de6","Type":"ContainerDied","Data":"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b"} Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.509481 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d8549c45-ddd2-4a25-904b-de7f41318de6","Type":"ContainerDied","Data":"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3"} Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.509498 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"d8549c45-ddd2-4a25-904b-de7f41318de6","Type":"ContainerDied","Data":"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9"} Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.509513 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d8549c45-ddd2-4a25-904b-de7f41318de6","Type":"ContainerDied","Data":"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6"} Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.509528 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d8549c45-ddd2-4a25-904b-de7f41318de6","Type":"ContainerDied","Data":"61f94e46a5d6e4eff97304e6308df3055cb529052afab2c8e75cfbf356f6a909"} Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.509551 4813 scope.go:117] "RemoveContainer" containerID="a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.509739 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.529061 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d8549c45-ddd2-4a25-904b-de7f41318de6" (UID: "d8549c45-ddd2-4a25-904b-de7f41318de6"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.534975 4813 scope.go:117] "RemoveContainer" containerID="5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.539133 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8549c45-ddd2-4a25-904b-de7f41318de6" (UID: "d8549c45-ddd2-4a25-904b-de7f41318de6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.549627 4813 scope.go:117] "RemoveContainer" containerID="785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.562553 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.562593 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8549c45-ddd2-4a25-904b-de7f41318de6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.562606 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.562618 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.562631 4813 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdx99\" (UniqueName: \"kubernetes.io/projected/d8549c45-ddd2-4a25-904b-de7f41318de6-kube-api-access-fdx99\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.562643 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d8549c45-ddd2-4a25-904b-de7f41318de6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.562654 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.570801 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-config-data" (OuterVolumeSpecName: "config-data") pod "d8549c45-ddd2-4a25-904b-de7f41318de6" (UID: "d8549c45-ddd2-4a25-904b-de7f41318de6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.576729 4813 scope.go:117] "RemoveContainer" containerID="a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.607751 4813 scope.go:117] "RemoveContainer" containerID="a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b" Feb 17 09:03:12 crc kubenswrapper[4813]: E0217 09:03:12.608559 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b\": container with ID starting with a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b not found: ID does not exist" containerID="a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.608635 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b"} err="failed to get container status \"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b\": rpc error: code = NotFound desc = could not find container \"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b\": container with ID starting with a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.608681 4813 scope.go:117] "RemoveContainer" containerID="5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3" Feb 17 09:03:12 crc kubenswrapper[4813]: E0217 09:03:12.609142 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3\": container with ID starting with 
5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3 not found: ID does not exist" containerID="5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.609183 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3"} err="failed to get container status \"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3\": rpc error: code = NotFound desc = could not find container \"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3\": container with ID starting with 5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3 not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.609212 4813 scope.go:117] "RemoveContainer" containerID="785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9" Feb 17 09:03:12 crc kubenswrapper[4813]: E0217 09:03:12.609550 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9\": container with ID starting with 785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9 not found: ID does not exist" containerID="785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.609597 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9"} err="failed to get container status \"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9\": rpc error: code = NotFound desc = could not find container \"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9\": container with ID starting with 785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9 not found: ID does not 
exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.609651 4813 scope.go:117] "RemoveContainer" containerID="a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6" Feb 17 09:03:12 crc kubenswrapper[4813]: E0217 09:03:12.609912 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6\": container with ID starting with a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6 not found: ID does not exist" containerID="a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.609934 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6"} err="failed to get container status \"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6\": rpc error: code = NotFound desc = could not find container \"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6\": container with ID starting with a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6 not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.609948 4813 scope.go:117] "RemoveContainer" containerID="a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.610200 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b"} err="failed to get container status \"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b\": rpc error: code = NotFound desc = could not find container \"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b\": container with ID starting with a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b not found: ID 
does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.610230 4813 scope.go:117] "RemoveContainer" containerID="5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.610510 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3"} err="failed to get container status \"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3\": rpc error: code = NotFound desc = could not find container \"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3\": container with ID starting with 5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3 not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.610554 4813 scope.go:117] "RemoveContainer" containerID="785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.610761 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9"} err="failed to get container status \"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9\": rpc error: code = NotFound desc = could not find container \"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9\": container with ID starting with 785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9 not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.610779 4813 scope.go:117] "RemoveContainer" containerID="a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.610998 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6"} err="failed to get container 
status \"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6\": rpc error: code = NotFound desc = could not find container \"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6\": container with ID starting with a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6 not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.611028 4813 scope.go:117] "RemoveContainer" containerID="a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.611276 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b"} err="failed to get container status \"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b\": rpc error: code = NotFound desc = could not find container \"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b\": container with ID starting with a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.611328 4813 scope.go:117] "RemoveContainer" containerID="5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.611539 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3"} err="failed to get container status \"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3\": rpc error: code = NotFound desc = could not find container \"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3\": container with ID starting with 5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3 not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.611558 4813 scope.go:117] "RemoveContainer" 
containerID="785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.611783 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9"} err="failed to get container status \"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9\": rpc error: code = NotFound desc = could not find container \"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9\": container with ID starting with 785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9 not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.611808 4813 scope.go:117] "RemoveContainer" containerID="a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.611998 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6"} err="failed to get container status \"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6\": rpc error: code = NotFound desc = could not find container \"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6\": container with ID starting with a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6 not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.612017 4813 scope.go:117] "RemoveContainer" containerID="a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.612210 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b"} err="failed to get container status \"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b\": rpc error: code = NotFound desc = could 
not find container \"a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b\": container with ID starting with a7de7e5e165bbd794732a26e7c2a9f5c9496628367f39a93a966a761b7e7e70b not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.612235 4813 scope.go:117] "RemoveContainer" containerID="5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.612484 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3"} err="failed to get container status \"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3\": rpc error: code = NotFound desc = could not find container \"5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3\": container with ID starting with 5a9c476caaa7f036b5aed8bf08234547a77456df6a44cad41c27d2cdd7fa0cf3 not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.612501 4813 scope.go:117] "RemoveContainer" containerID="785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.612719 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9"} err="failed to get container status \"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9\": rpc error: code = NotFound desc = could not find container \"785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9\": container with ID starting with 785d2591988a0d2c120595cb7a4335d849fb96131eab47108bfa5c20f3113af9 not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.612742 4813 scope.go:117] "RemoveContainer" containerID="a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 
09:03:12.612958 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6"} err="failed to get container status \"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6\": rpc error: code = NotFound desc = could not find container \"a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6\": container with ID starting with a6508b264f579385cdd08cad0acd6267979ac1ae54abe319d7d95e1ae1f8ecb6 not found: ID does not exist" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.664843 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8549c45-ddd2-4a25-904b-de7f41318de6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.874493 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.887089 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902123 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:12 crc kubenswrapper[4813]: E0217 09:03:12.902487 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="ceilometer-central-agent" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902503 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="ceilometer-central-agent" Feb 17 09:03:12 crc kubenswrapper[4813]: E0217 09:03:12.902520 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="ceilometer-notification-agent" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902529 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="ceilometer-notification-agent" Feb 17 09:03:12 crc kubenswrapper[4813]: E0217 09:03:12.902546 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc0e3ff0-0206-476b-8c76-0bba9ae5e484" containerName="watcher-applier" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902553 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc0e3ff0-0206-476b-8c76-0bba9ae5e484" containerName="watcher-applier" Feb 17 09:03:12 crc kubenswrapper[4813]: E0217 09:03:12.902571 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="proxy-httpd" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902580 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="proxy-httpd" Feb 17 09:03:12 crc kubenswrapper[4813]: E0217 09:03:12.902596 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="sg-core" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902604 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="sg-core" Feb 17 09:03:12 crc kubenswrapper[4813]: E0217 09:03:12.902617 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2059184-9627-4ec4-a119-46525b0239a0" containerName="mariadb-account-delete" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902625 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2059184-9627-4ec4-a119-46525b0239a0" containerName="mariadb-account-delete" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902832 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2059184-9627-4ec4-a119-46525b0239a0" containerName="mariadb-account-delete" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902849 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="ceilometer-notification-agent" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902861 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc0e3ff0-0206-476b-8c76-0bba9ae5e484" containerName="watcher-applier" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902874 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="sg-core" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902893 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="proxy-httpd" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.902910 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" containerName="ceilometer-central-agent" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.905084 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.910686 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.910886 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.923159 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.956718 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.969486 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.969579 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.969645 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5rxb\" (UniqueName: \"kubernetes.io/projected/4f61d979-09fb-4aac-ad39-b8de4603b2b7-kube-api-access-h5rxb\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 
09:03:12.969753 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-scripts\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.969885 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f61d979-09fb-4aac-ad39-b8de4603b2b7-run-httpd\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.969915 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.969938 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f61d979-09fb-4aac-ad39-b8de4603b2b7-log-httpd\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:12 crc kubenswrapper[4813]: I0217 09:03:12.969975 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-config-data\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.071639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.071800 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.071848 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5rxb\" (UniqueName: \"kubernetes.io/projected/4f61d979-09fb-4aac-ad39-b8de4603b2b7-kube-api-access-h5rxb\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.071944 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-scripts\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.072022 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f61d979-09fb-4aac-ad39-b8de4603b2b7-run-httpd\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.072065 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.072108 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f61d979-09fb-4aac-ad39-b8de4603b2b7-log-httpd\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.072203 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-config-data\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.074768 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f61d979-09fb-4aac-ad39-b8de4603b2b7-run-httpd\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.075050 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f61d979-09fb-4aac-ad39-b8de4603b2b7-log-httpd\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.078292 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.078809 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-scripts\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.079085 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.081245 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-config-data\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.091464 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.092169 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5rxb\" (UniqueName: \"kubernetes.io/projected/4f61d979-09fb-4aac-ad39-b8de4603b2b7-kube-api-access-h5rxb\") pod \"ceilometer-0\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.134105 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8549c45-ddd2-4a25-904b-de7f41318de6" path="/var/lib/kubelet/pods/d8549c45-ddd2-4a25-904b-de7f41318de6/volumes" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.283295 
4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:13 crc kubenswrapper[4813]: I0217 09:03:13.768011 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.535551 4813 generic.go:334] "Generic (PLEG): container finished" podID="626d563a-6a24-4038-9ef0-aa109f1edbba" containerID="e6fe251ca4dc5b6e9c25c820b1e7acb8e35fef18b264e53b83fd275a695d1862" exitCode=0 Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.535727 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"626d563a-6a24-4038-9ef0-aa109f1edbba","Type":"ContainerDied","Data":"e6fe251ca4dc5b6e9c25c820b1e7acb8e35fef18b264e53b83fd275a695d1862"} Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.539969 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f61d979-09fb-4aac-ad39-b8de4603b2b7","Type":"ContainerStarted","Data":"213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1"} Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.540006 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f61d979-09fb-4aac-ad39-b8de4603b2b7","Type":"ContainerStarted","Data":"ee86c8851d6a77bb982a1b74d9d9131c15fd7788d9176a27255b026b30f6e54f"} Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.581980 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.706105 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/626d563a-6a24-4038-9ef0-aa109f1edbba-logs\") pod \"626d563a-6a24-4038-9ef0-aa109f1edbba\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.706506 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-combined-ca-bundle\") pod \"626d563a-6a24-4038-9ef0-aa109f1edbba\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.707369 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-custom-prometheus-ca\") pod \"626d563a-6a24-4038-9ef0-aa109f1edbba\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.707502 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-config-data\") pod \"626d563a-6a24-4038-9ef0-aa109f1edbba\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.707616 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnwzs\" (UniqueName: \"kubernetes.io/projected/626d563a-6a24-4038-9ef0-aa109f1edbba-kube-api-access-pnwzs\") pod \"626d563a-6a24-4038-9ef0-aa109f1edbba\" (UID: \"626d563a-6a24-4038-9ef0-aa109f1edbba\") " Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.706725 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/626d563a-6a24-4038-9ef0-aa109f1edbba-logs" (OuterVolumeSpecName: "logs") pod "626d563a-6a24-4038-9ef0-aa109f1edbba" (UID: "626d563a-6a24-4038-9ef0-aa109f1edbba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.712256 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626d563a-6a24-4038-9ef0-aa109f1edbba-kube-api-access-pnwzs" (OuterVolumeSpecName: "kube-api-access-pnwzs") pod "626d563a-6a24-4038-9ef0-aa109f1edbba" (UID: "626d563a-6a24-4038-9ef0-aa109f1edbba"). InnerVolumeSpecName "kube-api-access-pnwzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.733868 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "626d563a-6a24-4038-9ef0-aa109f1edbba" (UID: "626d563a-6a24-4038-9ef0-aa109f1edbba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.742994 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "626d563a-6a24-4038-9ef0-aa109f1edbba" (UID: "626d563a-6a24-4038-9ef0-aa109f1edbba"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.756804 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-config-data" (OuterVolumeSpecName: "config-data") pod "626d563a-6a24-4038-9ef0-aa109f1edbba" (UID: "626d563a-6a24-4038-9ef0-aa109f1edbba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.809347 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/626d563a-6a24-4038-9ef0-aa109f1edbba-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.809415 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.809429 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.809440 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/626d563a-6a24-4038-9ef0-aa109f1edbba-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:14 crc kubenswrapper[4813]: I0217 09:03:14.809452 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnwzs\" (UniqueName: \"kubernetes.io/projected/626d563a-6a24-4038-9ef0-aa109f1edbba-kube-api-access-pnwzs\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:15 crc kubenswrapper[4813]: I0217 09:03:15.600197 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:03:15 crc kubenswrapper[4813]: I0217 09:03:15.601004 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"626d563a-6a24-4038-9ef0-aa109f1edbba","Type":"ContainerDied","Data":"14f79ac056602cac60bcc86468f3227f9b73877213a187a47678ade6df997f9f"}
Feb 17 09:03:15 crc kubenswrapper[4813]: I0217 09:03:15.601040 4813 scope.go:117] "RemoveContainer" containerID="e6fe251ca4dc5b6e9c25c820b1e7acb8e35fef18b264e53b83fd275a695d1862"
Feb 17 09:03:15 crc kubenswrapper[4813]: I0217 09:03:15.604937 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f61d979-09fb-4aac-ad39-b8de4603b2b7","Type":"ContainerStarted","Data":"bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f"}
Feb 17 09:03:15 crc kubenswrapper[4813]: I0217 09:03:15.621982 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:03:15 crc kubenswrapper[4813]: I0217 09:03:15.627129 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:03:16 crc kubenswrapper[4813]: I0217 09:03:16.616808 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f61d979-09fb-4aac-ad39-b8de4603b2b7","Type":"ContainerStarted","Data":"b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c"}
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.122197 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="626d563a-6a24-4038-9ef0-aa109f1edbba" path="/var/lib/kubelet/pods/626d563a-6a24-4038-9ef0-aa109f1edbba/volumes"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.424501 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-5bqjw"]
Feb 17 09:03:17 crc kubenswrapper[4813]: E0217 09:03:17.425289 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626d563a-6a24-4038-9ef0-aa109f1edbba" containerName="watcher-decision-engine"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.425340 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="626d563a-6a24-4038-9ef0-aa109f1edbba" containerName="watcher-decision-engine"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.425564 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="626d563a-6a24-4038-9ef0-aa109f1edbba" containerName="watcher-decision-engine"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.426188 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-5bqjw"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.432951 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-5bqjw"]
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.442402 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"]
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.443435 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.447625 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.449800 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"]
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.565240 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k26n\" (UniqueName: \"kubernetes.io/projected/d56492e9-7c2d-4713-8d1a-6948291ecd68-kube-api-access-4k26n\") pod \"watcher-db-create-5bqjw\" (UID: \"d56492e9-7c2d-4713-8d1a-6948291ecd68\") " pod="watcher-kuttl-default/watcher-db-create-5bqjw"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.565291 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56492e9-7c2d-4713-8d1a-6948291ecd68-operator-scripts\") pod \"watcher-db-create-5bqjw\" (UID: \"d56492e9-7c2d-4713-8d1a-6948291ecd68\") " pod="watcher-kuttl-default/watcher-db-create-5bqjw"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.565333 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254d5e59-34ee-45db-a654-1277da38cbee-operator-scripts\") pod \"watcher-f5e5-account-create-update-5lpfg\" (UID: \"254d5e59-34ee-45db-a654-1277da38cbee\") " pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.565370 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5xth\" (UniqueName: \"kubernetes.io/projected/254d5e59-34ee-45db-a654-1277da38cbee-kube-api-access-z5xth\") pod \"watcher-f5e5-account-create-update-5lpfg\" (UID: \"254d5e59-34ee-45db-a654-1277da38cbee\") " pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.629685 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f61d979-09fb-4aac-ad39-b8de4603b2b7","Type":"ContainerStarted","Data":"62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03"}
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.630020 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.656426 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.462031548 podStartE2EDuration="5.656411839s" podCreationTimestamp="2026-02-17 09:03:12 +0000 UTC" firstStartedPulling="2026-02-17 09:03:13.780919599 +0000 UTC m=+1341.441680822" lastFinishedPulling="2026-02-17 09:03:16.97529989 +0000 UTC m=+1344.636061113" observedRunningTime="2026-02-17 09:03:17.655719549 +0000 UTC m=+1345.316480782" watchObservedRunningTime="2026-02-17 09:03:17.656411839 +0000 UTC m=+1345.317173062"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.667261 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56492e9-7c2d-4713-8d1a-6948291ecd68-operator-scripts\") pod \"watcher-db-create-5bqjw\" (UID: \"d56492e9-7c2d-4713-8d1a-6948291ecd68\") " pod="watcher-kuttl-default/watcher-db-create-5bqjw"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.667467 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254d5e59-34ee-45db-a654-1277da38cbee-operator-scripts\") pod \"watcher-f5e5-account-create-update-5lpfg\" (UID: \"254d5e59-34ee-45db-a654-1277da38cbee\") " pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.667589 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5xth\" (UniqueName: \"kubernetes.io/projected/254d5e59-34ee-45db-a654-1277da38cbee-kube-api-access-z5xth\") pod \"watcher-f5e5-account-create-update-5lpfg\" (UID: \"254d5e59-34ee-45db-a654-1277da38cbee\") " pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.667863 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k26n\" (UniqueName: \"kubernetes.io/projected/d56492e9-7c2d-4713-8d1a-6948291ecd68-kube-api-access-4k26n\") pod \"watcher-db-create-5bqjw\" (UID: \"d56492e9-7c2d-4713-8d1a-6948291ecd68\") " pod="watcher-kuttl-default/watcher-db-create-5bqjw"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.668214 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254d5e59-34ee-45db-a654-1277da38cbee-operator-scripts\") pod \"watcher-f5e5-account-create-update-5lpfg\" (UID: \"254d5e59-34ee-45db-a654-1277da38cbee\") " pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.668255 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56492e9-7c2d-4713-8d1a-6948291ecd68-operator-scripts\") pod \"watcher-db-create-5bqjw\" (UID: \"d56492e9-7c2d-4713-8d1a-6948291ecd68\") " pod="watcher-kuttl-default/watcher-db-create-5bqjw"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.690975 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k26n\" (UniqueName: \"kubernetes.io/projected/d56492e9-7c2d-4713-8d1a-6948291ecd68-kube-api-access-4k26n\") pod \"watcher-db-create-5bqjw\" (UID: \"d56492e9-7c2d-4713-8d1a-6948291ecd68\") " pod="watcher-kuttl-default/watcher-db-create-5bqjw"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.692851 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5xth\" (UniqueName: \"kubernetes.io/projected/254d5e59-34ee-45db-a654-1277da38cbee-kube-api-access-z5xth\") pod \"watcher-f5e5-account-create-update-5lpfg\" (UID: \"254d5e59-34ee-45db-a654-1277da38cbee\") " pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.748806 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-5bqjw"
Feb 17 09:03:17 crc kubenswrapper[4813]: I0217 09:03:17.765914 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"
Feb 17 09:03:18 crc kubenswrapper[4813]: I0217 09:03:18.343077 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-5bqjw"]
Feb 17 09:03:18 crc kubenswrapper[4813]: I0217 09:03:18.473023 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"]
Feb 17 09:03:18 crc kubenswrapper[4813]: I0217 09:03:18.637907 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-5bqjw" event={"ID":"d56492e9-7c2d-4713-8d1a-6948291ecd68","Type":"ContainerStarted","Data":"0ed3187798b9b41634e1666debefc88606d08d759b9e38df18732b2cf37adc53"}
Feb 17 09:03:18 crc kubenswrapper[4813]: I0217 09:03:18.637955 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-5bqjw" event={"ID":"d56492e9-7c2d-4713-8d1a-6948291ecd68","Type":"ContainerStarted","Data":"a741a69e8aa0f183fcc30a0e9f9854a6bd1e47d00a60f57d56a15631ba6ff3a4"}
Feb 17 09:03:18 crc kubenswrapper[4813]: I0217 09:03:18.639472 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg" event={"ID":"254d5e59-34ee-45db-a654-1277da38cbee","Type":"ContainerStarted","Data":"8a5d3e90f4b5d8a8b468c1e1657308654930a3987a00e4a3f0106ff143c20bbd"}
Feb 17 09:03:18 crc kubenswrapper[4813]: I0217 09:03:18.662991 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-5bqjw" podStartSLOduration=1.6629739159999999 podStartE2EDuration="1.662973916s" podCreationTimestamp="2026-02-17 09:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:03:18.659445296 +0000 UTC m=+1346.320206529" watchObservedRunningTime="2026-02-17 09:03:18.662973916 +0000 UTC m=+1346.323735139"
Feb 17 09:03:19 crc kubenswrapper[4813]: I0217 09:03:19.651650 4813 generic.go:334] "Generic (PLEG): container finished" podID="d56492e9-7c2d-4713-8d1a-6948291ecd68" containerID="0ed3187798b9b41634e1666debefc88606d08d759b9e38df18732b2cf37adc53" exitCode=0
Feb 17 09:03:19 crc kubenswrapper[4813]: I0217 09:03:19.651916 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-5bqjw" event={"ID":"d56492e9-7c2d-4713-8d1a-6948291ecd68","Type":"ContainerDied","Data":"0ed3187798b9b41634e1666debefc88606d08d759b9e38df18732b2cf37adc53"}
Feb 17 09:03:19 crc kubenswrapper[4813]: I0217 09:03:19.656692 4813 generic.go:334] "Generic (PLEG): container finished" podID="254d5e59-34ee-45db-a654-1277da38cbee" containerID="d1655e624d8f359518fde38db62600d3511de599fe709779972ecb70c41196a3" exitCode=0
Feb 17 09:03:19 crc kubenswrapper[4813]: I0217 09:03:19.656715 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg" event={"ID":"254d5e59-34ee-45db-a654-1277da38cbee","Type":"ContainerDied","Data":"d1655e624d8f359518fde38db62600d3511de599fe709779972ecb70c41196a3"}
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.076013 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.138457 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5xth\" (UniqueName: \"kubernetes.io/projected/254d5e59-34ee-45db-a654-1277da38cbee-kube-api-access-z5xth\") pod \"254d5e59-34ee-45db-a654-1277da38cbee\" (UID: \"254d5e59-34ee-45db-a654-1277da38cbee\") "
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.138533 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254d5e59-34ee-45db-a654-1277da38cbee-operator-scripts\") pod \"254d5e59-34ee-45db-a654-1277da38cbee\" (UID: \"254d5e59-34ee-45db-a654-1277da38cbee\") "
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.139369 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254d5e59-34ee-45db-a654-1277da38cbee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "254d5e59-34ee-45db-a654-1277da38cbee" (UID: "254d5e59-34ee-45db-a654-1277da38cbee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.139706 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254d5e59-34ee-45db-a654-1277da38cbee-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.145042 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254d5e59-34ee-45db-a654-1277da38cbee-kube-api-access-z5xth" (OuterVolumeSpecName: "kube-api-access-z5xth") pod "254d5e59-34ee-45db-a654-1277da38cbee" (UID: "254d5e59-34ee-45db-a654-1277da38cbee"). InnerVolumeSpecName "kube-api-access-z5xth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.187127 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-5bqjw"
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.240625 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k26n\" (UniqueName: \"kubernetes.io/projected/d56492e9-7c2d-4713-8d1a-6948291ecd68-kube-api-access-4k26n\") pod \"d56492e9-7c2d-4713-8d1a-6948291ecd68\" (UID: \"d56492e9-7c2d-4713-8d1a-6948291ecd68\") "
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.240705 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56492e9-7c2d-4713-8d1a-6948291ecd68-operator-scripts\") pod \"d56492e9-7c2d-4713-8d1a-6948291ecd68\" (UID: \"d56492e9-7c2d-4713-8d1a-6948291ecd68\") "
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.241073 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5xth\" (UniqueName: \"kubernetes.io/projected/254d5e59-34ee-45db-a654-1277da38cbee-kube-api-access-z5xth\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.241381 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56492e9-7c2d-4713-8d1a-6948291ecd68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d56492e9-7c2d-4713-8d1a-6948291ecd68" (UID: "d56492e9-7c2d-4713-8d1a-6948291ecd68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.244021 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56492e9-7c2d-4713-8d1a-6948291ecd68-kube-api-access-4k26n" (OuterVolumeSpecName: "kube-api-access-4k26n") pod "d56492e9-7c2d-4713-8d1a-6948291ecd68" (UID: "d56492e9-7c2d-4713-8d1a-6948291ecd68"). InnerVolumeSpecName "kube-api-access-4k26n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.342162 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k26n\" (UniqueName: \"kubernetes.io/projected/d56492e9-7c2d-4713-8d1a-6948291ecd68-kube-api-access-4k26n\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.342213 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d56492e9-7c2d-4713-8d1a-6948291ecd68-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.677765 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-5bqjw" event={"ID":"d56492e9-7c2d-4713-8d1a-6948291ecd68","Type":"ContainerDied","Data":"a741a69e8aa0f183fcc30a0e9f9854a6bd1e47d00a60f57d56a15631ba6ff3a4"}
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.677849 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a741a69e8aa0f183fcc30a0e9f9854a6bd1e47d00a60f57d56a15631ba6ff3a4"
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.677926 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-5bqjw"
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.683052 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg" event={"ID":"254d5e59-34ee-45db-a654-1277da38cbee","Type":"ContainerDied","Data":"8a5d3e90f4b5d8a8b468c1e1657308654930a3987a00e4a3f0106ff143c20bbd"}
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.683113 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a5d3e90f4b5d8a8b468c1e1657308654930a3987a00e4a3f0106ff143c20bbd"
Feb 17 09:03:21 crc kubenswrapper[4813]: I0217 09:03:21.683194 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.789267 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-278nl"]
Feb 17 09:03:22 crc kubenswrapper[4813]: E0217 09:03:22.790580 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56492e9-7c2d-4713-8d1a-6948291ecd68" containerName="mariadb-database-create"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.790658 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56492e9-7c2d-4713-8d1a-6948291ecd68" containerName="mariadb-database-create"
Feb 17 09:03:22 crc kubenswrapper[4813]: E0217 09:03:22.790718 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254d5e59-34ee-45db-a654-1277da38cbee" containerName="mariadb-account-create-update"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.790779 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="254d5e59-34ee-45db-a654-1277da38cbee" containerName="mariadb-account-create-update"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.790998 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="254d5e59-34ee-45db-a654-1277da38cbee" containerName="mariadb-account-create-update"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.791088 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56492e9-7c2d-4713-8d1a-6948291ecd68" containerName="mariadb-database-create"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.791632 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.796644 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-2jcjs"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.796907 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.803028 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-278nl"]
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.868688 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-278nl\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.868785 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdhbg\" (UniqueName: \"kubernetes.io/projected/564a261f-8d72-4cce-9565-ff66263ae004-kube-api-access-vdhbg\") pod \"watcher-kuttl-db-sync-278nl\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.868892 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-config-data\") pod \"watcher-kuttl-db-sync-278nl\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.869019 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-db-sync-config-data\") pod \"watcher-kuttl-db-sync-278nl\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.970437 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-db-sync-config-data\") pod \"watcher-kuttl-db-sync-278nl\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.970575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-278nl\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.970605 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdhbg\" (UniqueName: \"kubernetes.io/projected/564a261f-8d72-4cce-9565-ff66263ae004-kube-api-access-vdhbg\") pod \"watcher-kuttl-db-sync-278nl\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.970651 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-config-data\") pod \"watcher-kuttl-db-sync-278nl\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.977030 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-config-data\") pod \"watcher-kuttl-db-sync-278nl\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.978216 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-db-sync-config-data\") pod \"watcher-kuttl-db-sync-278nl\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.991023 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-278nl\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:22 crc kubenswrapper[4813]: I0217 09:03:22.993191 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdhbg\" (UniqueName: \"kubernetes.io/projected/564a261f-8d72-4cce-9565-ff66263ae004-kube-api-access-vdhbg\") pod \"watcher-kuttl-db-sync-278nl\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:23 crc kubenswrapper[4813]: I0217 09:03:23.129421 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:23 crc kubenswrapper[4813]: I0217 09:03:23.597647 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-278nl"]
Feb 17 09:03:23 crc kubenswrapper[4813]: I0217 09:03:23.709437 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl" event={"ID":"564a261f-8d72-4cce-9565-ff66263ae004","Type":"ContainerStarted","Data":"c3dbaf4c1cdc956dfa875896424a6852a86bac77cd02a0f7577b96adaaed340c"}
Feb 17 09:03:24 crc kubenswrapper[4813]: I0217 09:03:24.718902 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl" event={"ID":"564a261f-8d72-4cce-9565-ff66263ae004","Type":"ContainerStarted","Data":"ff1a41b9d826bf0a13960d2a7308985735df3748702de020ae9e8a1ff3d8fe2e"}
Feb 17 09:03:24 crc kubenswrapper[4813]: I0217 09:03:24.741954 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl" podStartSLOduration=2.741930945 podStartE2EDuration="2.741930945s" podCreationTimestamp="2026-02-17 09:03:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:03:24.733597518 +0000 UTC m=+1352.394358741" watchObservedRunningTime="2026-02-17 09:03:24.741930945 +0000 UTC m=+1352.402692178"
Feb 17 09:03:26 crc kubenswrapper[4813]: I0217 09:03:26.745858 4813 generic.go:334] "Generic (PLEG): container finished" podID="564a261f-8d72-4cce-9565-ff66263ae004" containerID="ff1a41b9d826bf0a13960d2a7308985735df3748702de020ae9e8a1ff3d8fe2e" exitCode=0
Feb 17 09:03:26 crc kubenswrapper[4813]: I0217 09:03:26.746216 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl" event={"ID":"564a261f-8d72-4cce-9565-ff66263ae004","Type":"ContainerDied","Data":"ff1a41b9d826bf0a13960d2a7308985735df3748702de020ae9e8a1ff3d8fe2e"}
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.173105 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.358119 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdhbg\" (UniqueName: \"kubernetes.io/projected/564a261f-8d72-4cce-9565-ff66263ae004-kube-api-access-vdhbg\") pod \"564a261f-8d72-4cce-9565-ff66263ae004\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") "
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.358213 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-config-data\") pod \"564a261f-8d72-4cce-9565-ff66263ae004\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") "
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.358286 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-combined-ca-bundle\") pod \"564a261f-8d72-4cce-9565-ff66263ae004\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") "
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.358516 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-db-sync-config-data\") pod \"564a261f-8d72-4cce-9565-ff66263ae004\" (UID: \"564a261f-8d72-4cce-9565-ff66263ae004\") "
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.367981 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "564a261f-8d72-4cce-9565-ff66263ae004" (UID: "564a261f-8d72-4cce-9565-ff66263ae004"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.368577 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564a261f-8d72-4cce-9565-ff66263ae004-kube-api-access-vdhbg" (OuterVolumeSpecName: "kube-api-access-vdhbg") pod "564a261f-8d72-4cce-9565-ff66263ae004" (UID: "564a261f-8d72-4cce-9565-ff66263ae004"). InnerVolumeSpecName "kube-api-access-vdhbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.411563 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "564a261f-8d72-4cce-9565-ff66263ae004" (UID: "564a261f-8d72-4cce-9565-ff66263ae004"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.438064 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-config-data" (OuterVolumeSpecName: "config-data") pod "564a261f-8d72-4cce-9565-ff66263ae004" (UID: "564a261f-8d72-4cce-9565-ff66263ae004"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.460663 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.460696 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.460707 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/564a261f-8d72-4cce-9565-ff66263ae004-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.460716 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdhbg\" (UniqueName: \"kubernetes.io/projected/564a261f-8d72-4cce-9565-ff66263ae004-kube-api-access-vdhbg\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.769160 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl" event={"ID":"564a261f-8d72-4cce-9565-ff66263ae004","Type":"ContainerDied","Data":"c3dbaf4c1cdc956dfa875896424a6852a86bac77cd02a0f7577b96adaaed340c"}
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.769428 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3dbaf4c1cdc956dfa875896424a6852a86bac77cd02a0f7577b96adaaed340c"
Feb 17 09:03:28 crc kubenswrapper[4813]: I0217 09:03:28.769427 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-278nl"
Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.041787 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:03:29 crc kubenswrapper[4813]: E0217 09:03:29.042104 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564a261f-8d72-4cce-9565-ff66263ae004" containerName="watcher-kuttl-db-sync"
Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.042120 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="564a261f-8d72-4cce-9565-ff66263ae004" containerName="watcher-kuttl-db-sync"
Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.042281 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="564a261f-8d72-4cce-9565-ff66263ae004" containerName="watcher-kuttl-db-sync"
Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.042778 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.044688 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.045444 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-2jcjs"
Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.058083 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.104907 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.105947 4813 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.109774 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.124224 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.161102 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.162491 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.165060 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.165811 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.166056 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.174000 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.174039 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.174103 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jtdv\" (UniqueName: \"kubernetes.io/projected/8131fc07-9b89-45bb-bebb-a1630a6120af-kube-api-access-4jtdv\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.174132 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.174144 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8131fc07-9b89-45bb-bebb-a1630a6120af-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.174213 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.277872 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv67b\" (UniqueName: \"kubernetes.io/projected/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-kube-api-access-tv67b\") pod \"watcher-kuttl-applier-0\" (UID: 
\"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.277906 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.277950 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.277966 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.277997 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.278017 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-config-data\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.278033 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.278058 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.278115 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnftp\" (UniqueName: \"kubernetes.io/projected/35ab64fa-2456-4a42-bdf7-69664e8e5144-kube-api-access-gnftp\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.278153 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35ab64fa-2456-4a42-bdf7-69664e8e5144-logs\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.278172 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jtdv\" (UniqueName: 
\"kubernetes.io/projected/8131fc07-9b89-45bb-bebb-a1630a6120af-kube-api-access-4jtdv\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.278191 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.278223 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.278242 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8131fc07-9b89-45bb-bebb-a1630a6120af-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.278260 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.278286 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.280154 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8131fc07-9b89-45bb-bebb-a1630a6120af-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.288395 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.290883 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.293923 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.306552 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jtdv\" 
(UniqueName: \"kubernetes.io/projected/8131fc07-9b89-45bb-bebb-a1630a6120af-kube-api-access-4jtdv\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.379552 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.379625 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnftp\" (UniqueName: \"kubernetes.io/projected/35ab64fa-2456-4a42-bdf7-69664e8e5144-kube-api-access-gnftp\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.379670 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35ab64fa-2456-4a42-bdf7-69664e8e5144-logs\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.379704 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.379750 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.379794 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.379829 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv67b\" (UniqueName: \"kubernetes.io/projected/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-kube-api-access-tv67b\") pod \"watcher-kuttl-applier-0\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.379853 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.379896 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.379921 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.379955 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.380201 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35ab64fa-2456-4a42-bdf7-69664e8e5144-logs\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.380878 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.382716 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.383205 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-config-data\") pod \"watcher-kuttl-applier-0\" (UID: 
\"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.385349 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.386878 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.387315 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.388240 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.398026 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.401495 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.402760 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnftp\" (UniqueName: \"kubernetes.io/projected/35ab64fa-2456-4a42-bdf7-69664e8e5144-kube-api-access-gnftp\") pod \"watcher-kuttl-api-0\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.407425 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv67b\" (UniqueName: \"kubernetes.io/projected/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-kube-api-access-tv67b\") pod \"watcher-kuttl-applier-0\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.424436 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.501961 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:29 crc kubenswrapper[4813]: I0217 09:03:29.874180 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.029389 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:03:30 crc kubenswrapper[4813]: W0217 09:03:30.029644 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccdc699e_0784_4dc7_a515_bdd6d9d1d5a3.slice/crio-6ebb037d4b6a29466e8541e409283e78928b39f221a58f15321116cc21ef8a11 WatchSource:0}: Error finding container 6ebb037d4b6a29466e8541e409283e78928b39f221a58f15321116cc21ef8a11: Status 404 returned error can't find the container with id 6ebb037d4b6a29466e8541e409283e78928b39f221a58f15321116cc21ef8a11 Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.164550 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.795253 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8131fc07-9b89-45bb-bebb-a1630a6120af","Type":"ContainerStarted","Data":"e388fe6699eb05965726d928b29d87cc57839d1d600c33bcb57b7a5f6f978e26"} Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.795295 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8131fc07-9b89-45bb-bebb-a1630a6120af","Type":"ContainerStarted","Data":"fdd674f513761c382248a325049e6e5a971ec89593f24abdb7ae531af3b75ac4"} Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.796994 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"35ab64fa-2456-4a42-bdf7-69664e8e5144","Type":"ContainerStarted","Data":"a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e"} Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.797041 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"35ab64fa-2456-4a42-bdf7-69664e8e5144","Type":"ContainerStarted","Data":"eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3"} Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.797059 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"35ab64fa-2456-4a42-bdf7-69664e8e5144","Type":"ContainerStarted","Data":"a23c4d7ae762c6dda2d1cfa2e506e16104fa3aae3aab020959ac975b74de1b6b"} Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.797470 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.798749 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3","Type":"ContainerStarted","Data":"089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a"} Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.798789 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3","Type":"ContainerStarted","Data":"6ebb037d4b6a29466e8541e409283e78928b39f221a58f15321116cc21ef8a11"} Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.816774 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.816762856 podStartE2EDuration="1.816762856s" podCreationTimestamp="2026-02-17 09:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:03:30.81512264 +0000 UTC m=+1358.475883863" watchObservedRunningTime="2026-02-17 09:03:30.816762856 +0000 UTC m=+1358.477524079"
Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.845298 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.845278488 podStartE2EDuration="1.845278488s" podCreationTimestamp="2026-02-17 09:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:03:30.841804509 +0000 UTC m=+1358.502565732" watchObservedRunningTime="2026-02-17 09:03:30.845278488 +0000 UTC m=+1358.506039721"
Feb 17 09:03:30 crc kubenswrapper[4813]: I0217 09:03:30.863212 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.863193227 podStartE2EDuration="1.863193227s" podCreationTimestamp="2026-02-17 09:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:03:30.856039224 +0000 UTC m=+1358.516800447" watchObservedRunningTime="2026-02-17 09:03:30.863193227 +0000 UTC m=+1358.523954450"
Feb 17 09:03:32 crc kubenswrapper[4813]: I0217 09:03:32.811839 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 09:03:32 crc kubenswrapper[4813]: I0217 09:03:32.939343 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:34 crc kubenswrapper[4813]: I0217 09:03:34.425574 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:03:34 crc kubenswrapper[4813]: I0217 09:03:34.502486 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:39 crc kubenswrapper[4813]: I0217 09:03:39.389543 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:03:39 crc kubenswrapper[4813]: I0217 09:03:39.418067 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:03:39 crc kubenswrapper[4813]: I0217 09:03:39.425350 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:03:39 crc kubenswrapper[4813]: I0217 09:03:39.458007 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:03:39 crc kubenswrapper[4813]: I0217 09:03:39.503456 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:39 crc kubenswrapper[4813]: I0217 09:03:39.513880 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:39 crc kubenswrapper[4813]: I0217 09:03:39.880181 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:03:39 crc kubenswrapper[4813]: I0217 09:03:39.890452 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:39 crc kubenswrapper[4813]: I0217 09:03:39.914853 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:03:39 crc kubenswrapper[4813]: I0217 09:03:39.922678 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:03:42 crc kubenswrapper[4813]: I0217 09:03:42.119543 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:03:42 crc kubenswrapper[4813]: I0217 09:03:42.120232 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="ceilometer-central-agent" containerID="cri-o://213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1" gracePeriod=30
Feb 17 09:03:42 crc kubenswrapper[4813]: I0217 09:03:42.120418 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="proxy-httpd" containerID="cri-o://62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03" gracePeriod=30
Feb 17 09:03:42 crc kubenswrapper[4813]: I0217 09:03:42.120474 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="sg-core" containerID="cri-o://b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c" gracePeriod=30
Feb 17 09:03:42 crc kubenswrapper[4813]: I0217 09:03:42.120515 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="ceilometer-notification-agent" containerID="cri-o://bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f" gracePeriod=30
Feb 17 09:03:42 crc kubenswrapper[4813]: I0217 09:03:42.240421 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.161:3000/\": read tcp 10.217.0.2:39860->10.217.0.161:3000: read: connection reset by peer"
Feb 17 09:03:42 crc kubenswrapper[4813]: I0217 09:03:42.912164 4813 generic.go:334] "Generic (PLEG): container finished" podID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerID="62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03" exitCode=0
Feb 17 09:03:42 crc kubenswrapper[4813]: I0217 09:03:42.912455 4813 generic.go:334] "Generic (PLEG): container finished" podID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerID="b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c" exitCode=2
Feb 17 09:03:42 crc kubenswrapper[4813]: I0217 09:03:42.912269 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f61d979-09fb-4aac-ad39-b8de4603b2b7","Type":"ContainerDied","Data":"62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03"}
Feb 17 09:03:42 crc kubenswrapper[4813]: I0217 09:03:42.912647 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f61d979-09fb-4aac-ad39-b8de4603b2b7","Type":"ContainerDied","Data":"b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c"}
Feb 17 09:03:43 crc kubenswrapper[4813]: I0217 09:03:43.283861 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.161:3000/\": dial tcp 10.217.0.161:3000: connect: connection refused"
Feb 17 09:03:43 crc kubenswrapper[4813]: I0217 09:03:43.926036 4813 generic.go:334] "Generic (PLEG): container finished" podID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerID="213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1" exitCode=0
Feb 17 09:03:43 crc kubenswrapper[4813]: I0217 09:03:43.926130 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f61d979-09fb-4aac-ad39-b8de4603b2b7","Type":"ContainerDied","Data":"213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1"}
Feb 17 09:03:43 crc kubenswrapper[4813]: I0217 09:03:43.941056 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:03:43 crc kubenswrapper[4813]: I0217 09:03:43.941397 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="35ab64fa-2456-4a42-bdf7-69664e8e5144" containerName="watcher-kuttl-api-log" containerID="cri-o://eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3" gracePeriod=30
Feb 17 09:03:43 crc kubenswrapper[4813]: I0217 09:03:43.941465 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="35ab64fa-2456-4a42-bdf7-69664e8e5144" containerName="watcher-api" containerID="cri-o://a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e" gracePeriod=30
Feb 17 09:03:44 crc kubenswrapper[4813]: I0217 09:03:44.503080 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="35ab64fa-2456-4a42-bdf7-69664e8e5144" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.167:9322/\": dial tcp 10.217.0.167:9322: connect: connection refused"
Feb 17 09:03:44 crc kubenswrapper[4813]: I0217 09:03:44.503349 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="35ab64fa-2456-4a42-bdf7-69664e8e5144" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.167:9322/\": dial tcp 10.217.0.167:9322: connect: connection refused"
Feb 17 09:03:44 crc kubenswrapper[4813]: I0217 09:03:44.831908 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:44 crc kubenswrapper[4813]: I0217 09:03:44.938245 4813 generic.go:334] "Generic (PLEG): container finished" podID="35ab64fa-2456-4a42-bdf7-69664e8e5144" containerID="a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e" exitCode=0
Feb 17 09:03:44 crc kubenswrapper[4813]: I0217 09:03:44.938291 4813 generic.go:334] "Generic (PLEG): container finished" podID="35ab64fa-2456-4a42-bdf7-69664e8e5144" containerID="eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3" exitCode=143
Feb 17 09:03:44 crc kubenswrapper[4813]: I0217 09:03:44.938362 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:44 crc kubenswrapper[4813]: I0217 09:03:44.938362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"35ab64fa-2456-4a42-bdf7-69664e8e5144","Type":"ContainerDied","Data":"a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e"}
Feb 17 09:03:44 crc kubenswrapper[4813]: I0217 09:03:44.938431 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"35ab64fa-2456-4a42-bdf7-69664e8e5144","Type":"ContainerDied","Data":"eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3"}
Feb 17 09:03:44 crc kubenswrapper[4813]: I0217 09:03:44.938446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"35ab64fa-2456-4a42-bdf7-69664e8e5144","Type":"ContainerDied","Data":"a23c4d7ae762c6dda2d1cfa2e506e16104fa3aae3aab020959ac975b74de1b6b"}
Feb 17 09:03:44 crc kubenswrapper[4813]: I0217 09:03:44.938465 4813 scope.go:117] "RemoveContainer" containerID="a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e"
Feb 17 09:03:44 crc kubenswrapper[4813]: I0217 09:03:44.987750 4813 scope.go:117] "RemoveContainer" containerID="eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.012259 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-custom-prometheus-ca\") pod \"35ab64fa-2456-4a42-bdf7-69664e8e5144\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") "
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.012428 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-config-data\") pod \"35ab64fa-2456-4a42-bdf7-69664e8e5144\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") "
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.012472 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-internal-tls-certs\") pod \"35ab64fa-2456-4a42-bdf7-69664e8e5144\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") "
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.012655 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-combined-ca-bundle\") pod \"35ab64fa-2456-4a42-bdf7-69664e8e5144\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") "
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.012725 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-public-tls-certs\") pod \"35ab64fa-2456-4a42-bdf7-69664e8e5144\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") "
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.012775 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnftp\" (UniqueName: \"kubernetes.io/projected/35ab64fa-2456-4a42-bdf7-69664e8e5144-kube-api-access-gnftp\") pod \"35ab64fa-2456-4a42-bdf7-69664e8e5144\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") "
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.012847 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35ab64fa-2456-4a42-bdf7-69664e8e5144-logs\") pod \"35ab64fa-2456-4a42-bdf7-69664e8e5144\" (UID: \"35ab64fa-2456-4a42-bdf7-69664e8e5144\") "
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.013819 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35ab64fa-2456-4a42-bdf7-69664e8e5144-logs" (OuterVolumeSpecName: "logs") pod "35ab64fa-2456-4a42-bdf7-69664e8e5144" (UID: "35ab64fa-2456-4a42-bdf7-69664e8e5144"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.015765 4813 scope.go:117] "RemoveContainer" containerID="a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e"
Feb 17 09:03:45 crc kubenswrapper[4813]: E0217 09:03:45.016773 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e\": container with ID starting with a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e not found: ID does not exist" containerID="a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.016799 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e"} err="failed to get container status \"a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e\": rpc error: code = NotFound desc = could not find container \"a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e\": container with ID starting with a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e not found: ID does not exist"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.016818 4813 scope.go:117] "RemoveContainer" containerID="eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3"
Feb 17 09:03:45 crc kubenswrapper[4813]: E0217 09:03:45.017174 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3\": container with ID starting with eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3 not found: ID does not exist" containerID="eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.017194 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3"} err="failed to get container status \"eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3\": rpc error: code = NotFound desc = could not find container \"eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3\": container with ID starting with eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3 not found: ID does not exist"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.017231 4813 scope.go:117] "RemoveContainer" containerID="a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.031936 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e"} err="failed to get container status \"a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e\": rpc error: code = NotFound desc = could not find container \"a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e\": container with ID starting with a0b5843174ceec1f2647b4f2fb102400a1c3c9da715122748c439842bb0bfc4e not found: ID does not exist"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.031978 4813 scope.go:117] "RemoveContainer" containerID="eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.035709 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3"} err="failed to get container status \"eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3\": rpc error: code = NotFound desc = could not find container \"eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3\": container with ID starting with eeef62af650ebb78f574770422d523aef5b6f1312f3037b8a6cdc46ac09e39e3 not found: ID does not exist"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.035854 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35ab64fa-2456-4a42-bdf7-69664e8e5144-kube-api-access-gnftp" (OuterVolumeSpecName: "kube-api-access-gnftp") pod "35ab64fa-2456-4a42-bdf7-69664e8e5144" (UID: "35ab64fa-2456-4a42-bdf7-69664e8e5144"). InnerVolumeSpecName "kube-api-access-gnftp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.037942 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "35ab64fa-2456-4a42-bdf7-69664e8e5144" (UID: "35ab64fa-2456-4a42-bdf7-69664e8e5144"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.041476 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35ab64fa-2456-4a42-bdf7-69664e8e5144" (UID: "35ab64fa-2456-4a42-bdf7-69664e8e5144"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.059455 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "35ab64fa-2456-4a42-bdf7-69664e8e5144" (UID: "35ab64fa-2456-4a42-bdf7-69664e8e5144"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.060223 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-config-data" (OuterVolumeSpecName: "config-data") pod "35ab64fa-2456-4a42-bdf7-69664e8e5144" (UID: "35ab64fa-2456-4a42-bdf7-69664e8e5144"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.065886 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "35ab64fa-2456-4a42-bdf7-69664e8e5144" (UID: "35ab64fa-2456-4a42-bdf7-69664e8e5144"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.114360 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.114391 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.114403 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnftp\" (UniqueName: \"kubernetes.io/projected/35ab64fa-2456-4a42-bdf7-69664e8e5144-kube-api-access-gnftp\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.114415 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35ab64fa-2456-4a42-bdf7-69664e8e5144-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.114425 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.114438 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.114448 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35ab64fa-2456-4a42-bdf7-69664e8e5144-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.260452 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.268189 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.321156 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:03:45 crc kubenswrapper[4813]: E0217 09:03:45.322428 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ab64fa-2456-4a42-bdf7-69664e8e5144" containerName="watcher-kuttl-api-log"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.322455 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ab64fa-2456-4a42-bdf7-69664e8e5144" containerName="watcher-kuttl-api-log"
Feb 17 09:03:45 crc kubenswrapper[4813]: E0217 09:03:45.322491 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ab64fa-2456-4a42-bdf7-69664e8e5144" containerName="watcher-api"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.322497 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ab64fa-2456-4a42-bdf7-69664e8e5144" containerName="watcher-api"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.322810 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="35ab64fa-2456-4a42-bdf7-69664e8e5144" containerName="watcher-api"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.322850 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="35ab64fa-2456-4a42-bdf7-69664e8e5144" containerName="watcher-kuttl-api-log"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.324421 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.328120 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.328505 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.335207 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.339465 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.521834 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.522334 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.522363 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.522383 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64z2v\" (UniqueName: \"kubernetes.io/projected/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-kube-api-access-64z2v\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.522403 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.522436 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.522480 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.623560 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.623642 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.623698 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.623723 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.623748 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.623772 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64z2v\" (UniqueName: \"kubernetes.io/projected/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-kube-api-access-64z2v\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.623790 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.624336 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.628877 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.628988 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.629242 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.630852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.631132 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.644051 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64z2v\" (UniqueName: \"kubernetes.io/projected/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-kube-api-access-64z2v\") pod \"watcher-kuttl-api-0\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:45 crc kubenswrapper[4813]: I0217 09:03:45.675856 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:03:46 crc kubenswrapper[4813]: W0217 09:03:46.135282 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e2e1ab_9f4e_4021_ab40_cc7f173991ab.slice/crio-024c091fd6a4996660ac5e952ca8fcef5644ed7bc21269dcb04f70d4a5ad1371 WatchSource:0}: Error finding container 024c091fd6a4996660ac5e952ca8fcef5644ed7bc21269dcb04f70d4a5ad1371: Status 404 returned error can't find the container with id 024c091fd6a4996660ac5e952ca8fcef5644ed7bc21269dcb04f70d4a5ad1371
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.146680 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.751663 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.857861 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-sg-core-conf-yaml\") pod \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") "
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.857976 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-scripts\") pod \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") "
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.858070 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-ceilometer-tls-certs\") pod \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") "
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.858177 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-config-data\") pod \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") "
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.858752 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f61d979-09fb-4aac-ad39-b8de4603b2b7-run-httpd\") pod \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") "
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.858837 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f61d979-09fb-4aac-ad39-b8de4603b2b7-log-httpd\") pod \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") "
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.858862 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-combined-ca-bundle\") pod \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") "
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.858891 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5rxb\" (UniqueName: \"kubernetes.io/projected/4f61d979-09fb-4aac-ad39-b8de4603b2b7-kube-api-access-h5rxb\") pod \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\" (UID: \"4f61d979-09fb-4aac-ad39-b8de4603b2b7\") "
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.860038 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f61d979-09fb-4aac-ad39-b8de4603b2b7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4f61d979-09fb-4aac-ad39-b8de4603b2b7" (UID: "4f61d979-09fb-4aac-ad39-b8de4603b2b7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.860197 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f61d979-09fb-4aac-ad39-b8de4603b2b7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4f61d979-09fb-4aac-ad39-b8de4603b2b7" (UID: "4f61d979-09fb-4aac-ad39-b8de4603b2b7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.862428 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f61d979-09fb-4aac-ad39-b8de4603b2b7-kube-api-access-h5rxb" (OuterVolumeSpecName: "kube-api-access-h5rxb") pod "4f61d979-09fb-4aac-ad39-b8de4603b2b7" (UID: "4f61d979-09fb-4aac-ad39-b8de4603b2b7"). InnerVolumeSpecName "kube-api-access-h5rxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.862436 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-scripts" (OuterVolumeSpecName: "scripts") pod "4f61d979-09fb-4aac-ad39-b8de4603b2b7" (UID: "4f61d979-09fb-4aac-ad39-b8de4603b2b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.880627 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4f61d979-09fb-4aac-ad39-b8de4603b2b7" (UID: "4f61d979-09fb-4aac-ad39-b8de4603b2b7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.902274 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4f61d979-09fb-4aac-ad39-b8de4603b2b7" (UID: "4f61d979-09fb-4aac-ad39-b8de4603b2b7"). InnerVolumeSpecName "ceilometer-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.922514 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f61d979-09fb-4aac-ad39-b8de4603b2b7" (UID: "4f61d979-09fb-4aac-ad39-b8de4603b2b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.952385 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-config-data" (OuterVolumeSpecName: "config-data") pod "4f61d979-09fb-4aac-ad39-b8de4603b2b7" (UID: "4f61d979-09fb-4aac-ad39-b8de4603b2b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.960572 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.960604 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f61d979-09fb-4aac-ad39-b8de4603b2b7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.960618 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f61d979-09fb-4aac-ad39-b8de4603b2b7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.960630 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:46 crc 
kubenswrapper[4813]: I0217 09:03:46.960643 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5rxb\" (UniqueName: \"kubernetes.io/projected/4f61d979-09fb-4aac-ad39-b8de4603b2b7-kube-api-access-h5rxb\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.960654 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.960665 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.960677 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f61d979-09fb-4aac-ad39-b8de4603b2b7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.962714 4813 generic.go:334] "Generic (PLEG): container finished" podID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerID="bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f" exitCode=0 Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.962873 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.963413 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f61d979-09fb-4aac-ad39-b8de4603b2b7","Type":"ContainerDied","Data":"bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f"} Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.963461 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f61d979-09fb-4aac-ad39-b8de4603b2b7","Type":"ContainerDied","Data":"ee86c8851d6a77bb982a1b74d9d9131c15fd7788d9176a27255b026b30f6e54f"} Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.963486 4813 scope.go:117] "RemoveContainer" containerID="62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03" Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.970396 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab","Type":"ContainerStarted","Data":"8a79990a1519da57eee2a488fa7869441b79a38b5db437833d9f0fbdd02edeb5"} Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.970437 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab","Type":"ContainerStarted","Data":"7a8180c1feeaa350d170d8f82d840987de70ebdb6123f3f8b69dada3b3729317"} Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.970452 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab","Type":"ContainerStarted","Data":"024c091fd6a4996660ac5e952ca8fcef5644ed7bc21269dcb04f70d4a5ad1371"} Feb 17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.972035 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 
17 09:03:46 crc kubenswrapper[4813]: I0217 09:03:46.992546 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.992523626 podStartE2EDuration="1.992523626s" podCreationTimestamp="2026-02-17 09:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:03:46.988520132 +0000 UTC m=+1374.649281355" watchObservedRunningTime="2026-02-17 09:03:46.992523626 +0000 UTC m=+1374.653284859" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:46.999682 4813 scope.go:117] "RemoveContainer" containerID="b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.012714 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.027705 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.037262 4813 scope.go:117] "RemoveContainer" containerID="bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.044192 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:47 crc kubenswrapper[4813]: E0217 09:03:47.044509 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="ceilometer-notification-agent" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.044528 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="ceilometer-notification-agent" Feb 17 09:03:47 crc kubenswrapper[4813]: E0217 09:03:47.044540 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="sg-core" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.044546 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="sg-core" Feb 17 09:03:47 crc kubenswrapper[4813]: E0217 09:03:47.044558 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="ceilometer-central-agent" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.044564 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="ceilometer-central-agent" Feb 17 09:03:47 crc kubenswrapper[4813]: E0217 09:03:47.044575 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="proxy-httpd" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.044582 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="proxy-httpd" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.044724 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="ceilometer-notification-agent" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.044735 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="proxy-httpd" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.044745 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="ceilometer-central-agent" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.044766 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" containerName="sg-core" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.048723 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.053192 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.053435 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.053574 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.061429 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3d84905-781b-476c-8ba3-046c5b0a96f5-log-httpd\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.061474 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.061498 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3d84905-781b-476c-8ba3-046c5b0a96f5-run-httpd\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.061532 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crcjf\" (UniqueName: 
\"kubernetes.io/projected/d3d84905-781b-476c-8ba3-046c5b0a96f5-kube-api-access-crcjf\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.061571 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-scripts\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.061589 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.061610 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-config-data\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.061718 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.063528 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.126653 4813 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="35ab64fa-2456-4a42-bdf7-69664e8e5144" path="/var/lib/kubelet/pods/35ab64fa-2456-4a42-bdf7-69664e8e5144/volumes" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.127413 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f61d979-09fb-4aac-ad39-b8de4603b2b7" path="/var/lib/kubelet/pods/4f61d979-09fb-4aac-ad39-b8de4603b2b7/volumes" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.134288 4813 scope.go:117] "RemoveContainer" containerID="213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.154484 4813 scope.go:117] "RemoveContainer" containerID="62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03" Feb 17 09:03:47 crc kubenswrapper[4813]: E0217 09:03:47.155535 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03\": container with ID starting with 62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03 not found: ID does not exist" containerID="62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.155585 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03"} err="failed to get container status \"62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03\": rpc error: code = NotFound desc = could not find container \"62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03\": container with ID starting with 62ec469cb319e062c5de55739e6bc6ad0a0fbfcdc999944560ae6bdf35155d03 not found: ID does not exist" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.155619 4813 scope.go:117] "RemoveContainer" containerID="b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c" Feb 17 
09:03:47 crc kubenswrapper[4813]: E0217 09:03:47.155976 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c\": container with ID starting with b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c not found: ID does not exist" containerID="b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.156033 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c"} err="failed to get container status \"b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c\": rpc error: code = NotFound desc = could not find container \"b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c\": container with ID starting with b2b6c5bd177b66dadfa2fb97b314f5b396eeee81fc3e764980e2be1fcbd0281c not found: ID does not exist" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.156063 4813 scope.go:117] "RemoveContainer" containerID="bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f" Feb 17 09:03:47 crc kubenswrapper[4813]: E0217 09:03:47.156297 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f\": container with ID starting with bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f not found: ID does not exist" containerID="bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.156334 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f"} err="failed to get container status 
\"bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f\": rpc error: code = NotFound desc = could not find container \"bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f\": container with ID starting with bf31b6509d8841fc44a8a20b0afdb4f1b4bccfeb6fbd14e0231f69fbbc396f4f not found: ID does not exist" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.156354 4813 scope.go:117] "RemoveContainer" containerID="213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1" Feb 17 09:03:47 crc kubenswrapper[4813]: E0217 09:03:47.156602 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1\": container with ID starting with 213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1 not found: ID does not exist" containerID="213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.156627 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1"} err="failed to get container status \"213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1\": rpc error: code = NotFound desc = could not find container \"213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1\": container with ID starting with 213cd82ada153f0666d477c08566b8c0d988fab96f816d5736f152b4caf9fed1 not found: ID does not exist" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.162976 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.163037 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3d84905-781b-476c-8ba3-046c5b0a96f5-log-httpd\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.163067 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.163120 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3d84905-781b-476c-8ba3-046c5b0a96f5-run-httpd\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.163158 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crcjf\" (UniqueName: \"kubernetes.io/projected/d3d84905-781b-476c-8ba3-046c5b0a96f5-kube-api-access-crcjf\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.163231 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-scripts\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.163279 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.163344 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-config-data\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.163760 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3d84905-781b-476c-8ba3-046c5b0a96f5-run-httpd\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.164109 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3d84905-781b-476c-8ba3-046c5b0a96f5-log-httpd\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.169081 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.169282 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 
09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.169425 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.169537 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-config-data\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.169563 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-scripts\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.194744 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crcjf\" (UniqueName: \"kubernetes.io/projected/d3d84905-781b-476c-8ba3-046c5b0a96f5-kube-api-access-crcjf\") pod \"ceilometer-0\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.434438 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.915283 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:47 crc kubenswrapper[4813]: I0217 09:03:47.979280 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3d84905-781b-476c-8ba3-046c5b0a96f5","Type":"ContainerStarted","Data":"512dd6c354d1769a64bc4e8d28ebf815d935f209b43b95dfb2bb6efc23cca61c"} Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.473566 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-278nl"] Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.490163 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-278nl"] Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.521355 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherf5e5-account-delete-vdzqw"] Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.522698 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.540137 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherf5e5-account-delete-vdzqw"] Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.609580 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.609784 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="8131fc07-9b89-45bb-bebb-a1630a6120af" containerName="watcher-decision-engine" containerID="cri-o://e388fe6699eb05965726d928b29d87cc57839d1d600c33bcb57b7a5f6f978e26" gracePeriod=30 Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.626143 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46514e64-cce7-49de-ab58-06aafff822c8-operator-scripts\") pod \"watcherf5e5-account-delete-vdzqw\" (UID: \"46514e64-cce7-49de-ab58-06aafff822c8\") " pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.626450 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96nhp\" (UniqueName: \"kubernetes.io/projected/46514e64-cce7-49de-ab58-06aafff822c8-kube-api-access-96nhp\") pod \"watcherf5e5-account-delete-vdzqw\" (UID: \"46514e64-cce7-49de-ab58-06aafff822c8\") " pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.641678 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.673568 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.673793 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3" containerName="watcher-applier" containerID="cri-o://089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a" gracePeriod=30 Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.729403 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46514e64-cce7-49de-ab58-06aafff822c8-operator-scripts\") pod \"watcherf5e5-account-delete-vdzqw\" (UID: \"46514e64-cce7-49de-ab58-06aafff822c8\") " pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.729458 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96nhp\" (UniqueName: \"kubernetes.io/projected/46514e64-cce7-49de-ab58-06aafff822c8-kube-api-access-96nhp\") pod \"watcherf5e5-account-delete-vdzqw\" (UID: \"46514e64-cce7-49de-ab58-06aafff822c8\") " pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.730361 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46514e64-cce7-49de-ab58-06aafff822c8-operator-scripts\") pod \"watcherf5e5-account-delete-vdzqw\" (UID: \"46514e64-cce7-49de-ab58-06aafff822c8\") " pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.749515 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96nhp\" (UniqueName: \"kubernetes.io/projected/46514e64-cce7-49de-ab58-06aafff822c8-kube-api-access-96nhp\") pod \"watcherf5e5-account-delete-vdzqw\" (UID: 
\"46514e64-cce7-49de-ab58-06aafff822c8\") " pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.858876 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.998758 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.999587 4813 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-api-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-2jcjs\" not found" Feb 17 09:03:48 crc kubenswrapper[4813]: I0217 09:03:48.999962 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3d84905-781b-476c-8ba3-046c5b0a96f5","Type":"ContainerStarted","Data":"1fbd2c42922414c58e145012897c3868c17434fb873dfa85a50982a87e2bbd4f"} Feb 17 09:03:49 crc kubenswrapper[4813]: I0217 09:03:49.123506 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="564a261f-8d72-4cce-9565-ff66263ae004" path="/var/lib/kubelet/pods/564a261f-8d72-4cce-9565-ff66263ae004/volumes" Feb 17 09:03:49 crc kubenswrapper[4813]: E0217 09:03:49.147604 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Feb 17 09:03:49 crc kubenswrapper[4813]: E0217 09:03:49.147692 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data podName:f6e2e1ab-9f4e-4021-ab40-cc7f173991ab nodeName:}" failed. No retries permitted until 2026-02-17 09:03:49.64766887 +0000 UTC m=+1377.308430093 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data") pod "watcher-kuttl-api-0" (UID: "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab") : secret "watcher-kuttl-api-config-data" not found Feb 17 09:03:49 crc kubenswrapper[4813]: I0217 09:03:49.407056 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherf5e5-account-delete-vdzqw"] Feb 17 09:03:49 crc kubenswrapper[4813]: E0217 09:03:49.427141 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:03:49 crc kubenswrapper[4813]: E0217 09:03:49.430425 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:03:49 crc kubenswrapper[4813]: E0217 09:03:49.431459 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:03:49 crc kubenswrapper[4813]: E0217 09:03:49.431491 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3" 
containerName="watcher-applier" Feb 17 09:03:49 crc kubenswrapper[4813]: I0217 09:03:49.496691 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:49 crc kubenswrapper[4813]: E0217 09:03:49.658647 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Feb 17 09:03:49 crc kubenswrapper[4813]: E0217 09:03:49.658712 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data podName:f6e2e1ab-9f4e-4021-ab40-cc7f173991ab nodeName:}" failed. No retries permitted until 2026-02-17 09:03:50.658697789 +0000 UTC m=+1378.319459012 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data") pod "watcher-kuttl-api-0" (UID: "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab") : secret "watcher-kuttl-api-config-data" not found Feb 17 09:03:50 crc kubenswrapper[4813]: I0217 09:03:50.010216 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3d84905-781b-476c-8ba3-046c5b0a96f5","Type":"ContainerStarted","Data":"998b10a664f24e4dce2acd8350255c9cadc49408c6235216d1cfc538de15419b"} Feb 17 09:03:50 crc kubenswrapper[4813]: I0217 09:03:50.010256 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3d84905-781b-476c-8ba3-046c5b0a96f5","Type":"ContainerStarted","Data":"f2938d69dbfd01aeb009236b3adcdf4f1d172cad51fa0cd88ca70430a9022195"} Feb 17 09:03:50 crc kubenswrapper[4813]: I0217 09:03:50.012541 4813 generic.go:334] "Generic (PLEG): container finished" podID="46514e64-cce7-49de-ab58-06aafff822c8" containerID="7ef8b35dfbef639f7e9aea084a582cb7398265ae67d027dff7bd399a90e8ae1d" exitCode=0 Feb 17 09:03:50 crc kubenswrapper[4813]: I0217 
09:03:50.012591 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" event={"ID":"46514e64-cce7-49de-ab58-06aafff822c8","Type":"ContainerDied","Data":"7ef8b35dfbef639f7e9aea084a582cb7398265ae67d027dff7bd399a90e8ae1d"} Feb 17 09:03:50 crc kubenswrapper[4813]: I0217 09:03:50.012891 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" event={"ID":"46514e64-cce7-49de-ab58-06aafff822c8","Type":"ContainerStarted","Data":"a63861f5c8b1bf7cccaf11f3cd4a62c3247bd7db331422c45740ad1c4018c574"} Feb 17 09:03:50 crc kubenswrapper[4813]: I0217 09:03:50.013036 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" containerName="watcher-kuttl-api-log" containerID="cri-o://7a8180c1feeaa350d170d8f82d840987de70ebdb6123f3f8b69dada3b3729317" gracePeriod=30 Feb 17 09:03:50 crc kubenswrapper[4813]: I0217 09:03:50.013122 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" containerName="watcher-api" containerID="cri-o://8a79990a1519da57eee2a488fa7869441b79a38b5db437833d9f0fbdd02edeb5" gracePeriod=30 Feb 17 09:03:50 crc kubenswrapper[4813]: I0217 09:03:50.019921 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.168:9322/\": EOF" Feb 17 09:03:50 crc kubenswrapper[4813]: I0217 09:03:50.677381 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:50 crc kubenswrapper[4813]: E0217 09:03:50.698202 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: 
secret "watcher-kuttl-api-config-data" not found Feb 17 09:03:50 crc kubenswrapper[4813]: E0217 09:03:50.698266 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data podName:f6e2e1ab-9f4e-4021-ab40-cc7f173991ab nodeName:}" failed. No retries permitted until 2026-02-17 09:03:52.698251295 +0000 UTC m=+1380.359012518 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data") pod "watcher-kuttl-api-0" (UID: "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab") : secret "watcher-kuttl-api-config-data" not found Feb 17 09:03:50 crc kubenswrapper[4813]: I0217 09:03:50.946158 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.030923 4813 generic.go:334] "Generic (PLEG): container finished" podID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" containerID="7a8180c1feeaa350d170d8f82d840987de70ebdb6123f3f8b69dada3b3729317" exitCode=143 Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.031164 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab","Type":"ContainerDied","Data":"7a8180c1feeaa350d170d8f82d840987de70ebdb6123f3f8b69dada3b3729317"} Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.396856 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.509139 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46514e64-cce7-49de-ab58-06aafff822c8-operator-scripts\") pod \"46514e64-cce7-49de-ab58-06aafff822c8\" (UID: \"46514e64-cce7-49de-ab58-06aafff822c8\") " Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.509230 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96nhp\" (UniqueName: \"kubernetes.io/projected/46514e64-cce7-49de-ab58-06aafff822c8-kube-api-access-96nhp\") pod \"46514e64-cce7-49de-ab58-06aafff822c8\" (UID: \"46514e64-cce7-49de-ab58-06aafff822c8\") " Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.510507 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46514e64-cce7-49de-ab58-06aafff822c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46514e64-cce7-49de-ab58-06aafff822c8" (UID: "46514e64-cce7-49de-ab58-06aafff822c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.515562 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46514e64-cce7-49de-ab58-06aafff822c8-kube-api-access-96nhp" (OuterVolumeSpecName: "kube-api-access-96nhp") pod "46514e64-cce7-49de-ab58-06aafff822c8" (UID: "46514e64-cce7-49de-ab58-06aafff822c8"). InnerVolumeSpecName "kube-api-access-96nhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.544638 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.610139 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-config-data\") pod \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.610220 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv67b\" (UniqueName: \"kubernetes.io/projected/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-kube-api-access-tv67b\") pod \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.610292 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-logs\") pod \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.610326 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-combined-ca-bundle\") pod \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\" (UID: \"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3\") " Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.610662 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-logs" (OuterVolumeSpecName: "logs") pod "ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3" (UID: "ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.610687 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46514e64-cce7-49de-ab58-06aafff822c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.610735 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96nhp\" (UniqueName: \"kubernetes.io/projected/46514e64-cce7-49de-ab58-06aafff822c8-kube-api-access-96nhp\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.711936 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.894680 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-kube-api-access-tv67b" (OuterVolumeSpecName: "kube-api-access-tv67b") pod "ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3" (UID: "ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3"). InnerVolumeSpecName "kube-api-access-tv67b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.899927 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3" (UID: "ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.914151 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv67b\" (UniqueName: \"kubernetes.io/projected/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-kube-api-access-tv67b\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.914179 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:51 crc kubenswrapper[4813]: I0217 09:03:51.917421 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-config-data" (OuterVolumeSpecName: "config-data") pod "ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3" (UID: "ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.016216 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.046373 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3d84905-781b-476c-8ba3-046c5b0a96f5","Type":"ContainerStarted","Data":"936faa10f6eeb132451d190ac8b8b604c6066bb4ea7963ec98ff2a35fc097c0c"} Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.046480 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="ceilometer-central-agent" containerID="cri-o://1fbd2c42922414c58e145012897c3868c17434fb873dfa85a50982a87e2bbd4f" gracePeriod=30 Feb 17 09:03:52 crc 
kubenswrapper[4813]: I0217 09:03:52.046587 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="proxy-httpd" containerID="cri-o://936faa10f6eeb132451d190ac8b8b604c6066bb4ea7963ec98ff2a35fc097c0c" gracePeriod=30 Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.046622 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="sg-core" containerID="cri-o://998b10a664f24e4dce2acd8350255c9cadc49408c6235216d1cfc538de15419b" gracePeriod=30 Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.046651 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="ceilometer-notification-agent" containerID="cri-o://f2938d69dbfd01aeb009236b3adcdf4f1d172cad51fa0cd88ca70430a9022195" gracePeriod=30 Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.046774 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.059916 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" event={"ID":"46514e64-cce7-49de-ab58-06aafff822c8","Type":"ContainerDied","Data":"a63861f5c8b1bf7cccaf11f3cd4a62c3247bd7db331422c45740ad1c4018c574"} Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.059952 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a63861f5c8b1bf7cccaf11f3cd4a62c3247bd7db331422c45740ad1c4018c574" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.060019 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherf5e5-account-delete-vdzqw" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.070538 4813 generic.go:334] "Generic (PLEG): container finished" podID="ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3" containerID="089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a" exitCode=0 Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.070599 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3","Type":"ContainerDied","Data":"089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a"} Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.070623 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3","Type":"ContainerDied","Data":"6ebb037d4b6a29466e8541e409283e78928b39f221a58f15321116cc21ef8a11"} Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.070640 4813 scope.go:117] "RemoveContainer" containerID="089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.070754 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.079520 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.014975284 podStartE2EDuration="5.079502602s" podCreationTimestamp="2026-02-17 09:03:47 +0000 UTC" firstStartedPulling="2026-02-17 09:03:47.921948218 +0000 UTC m=+1375.582709441" lastFinishedPulling="2026-02-17 09:03:50.986475536 +0000 UTC m=+1378.647236759" observedRunningTime="2026-02-17 09:03:52.071971198 +0000 UTC m=+1379.732732421" watchObservedRunningTime="2026-02-17 09:03:52.079502602 +0000 UTC m=+1379.740263825" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.087548 4813 generic.go:334] "Generic (PLEG): container finished" podID="8131fc07-9b89-45bb-bebb-a1630a6120af" containerID="e388fe6699eb05965726d928b29d87cc57839d1d600c33bcb57b7a5f6f978e26" exitCode=0 Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.087588 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8131fc07-9b89-45bb-bebb-a1630a6120af","Type":"ContainerDied","Data":"e388fe6699eb05965726d928b29d87cc57839d1d600c33bcb57b7a5f6f978e26"} Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.123436 4813 scope.go:117] "RemoveContainer" containerID="089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a" Feb 17 09:03:52 crc kubenswrapper[4813]: E0217 09:03:52.129039 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a\": container with ID starting with 089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a not found: ID does not exist" containerID="089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.129083 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a"} err="failed to get container status \"089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a\": rpc error: code = NotFound desc = could not find container \"089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a\": container with ID starting with 089070ddc59d14ba7b6ef59717317b103a0cb00150ca5dc217f9428f17b92c5a not found: ID does not exist" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.144548 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.153206 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.351282 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.422780 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-combined-ca-bundle\") pod \"8131fc07-9b89-45bb-bebb-a1630a6120af\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.422892 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-config-data\") pod \"8131fc07-9b89-45bb-bebb-a1630a6120af\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.422917 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8131fc07-9b89-45bb-bebb-a1630a6120af-logs\") pod \"8131fc07-9b89-45bb-bebb-a1630a6120af\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.422940 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-custom-prometheus-ca\") pod \"8131fc07-9b89-45bb-bebb-a1630a6120af\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.423016 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jtdv\" (UniqueName: \"kubernetes.io/projected/8131fc07-9b89-45bb-bebb-a1630a6120af-kube-api-access-4jtdv\") pod \"8131fc07-9b89-45bb-bebb-a1630a6120af\" (UID: \"8131fc07-9b89-45bb-bebb-a1630a6120af\") " Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.423668 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8131fc07-9b89-45bb-bebb-a1630a6120af-logs" (OuterVolumeSpecName: "logs") pod "8131fc07-9b89-45bb-bebb-a1630a6120af" (UID: "8131fc07-9b89-45bb-bebb-a1630a6120af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.447547 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8131fc07-9b89-45bb-bebb-a1630a6120af-kube-api-access-4jtdv" (OuterVolumeSpecName: "kube-api-access-4jtdv") pod "8131fc07-9b89-45bb-bebb-a1630a6120af" (UID: "8131fc07-9b89-45bb-bebb-a1630a6120af"). InnerVolumeSpecName "kube-api-access-4jtdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.480661 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8131fc07-9b89-45bb-bebb-a1630a6120af" (UID: "8131fc07-9b89-45bb-bebb-a1630a6120af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.488196 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8131fc07-9b89-45bb-bebb-a1630a6120af" (UID: "8131fc07-9b89-45bb-bebb-a1630a6120af"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.517455 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-config-data" (OuterVolumeSpecName: "config-data") pod "8131fc07-9b89-45bb-bebb-a1630a6120af" (UID: "8131fc07-9b89-45bb-bebb-a1630a6120af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.525294 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jtdv\" (UniqueName: \"kubernetes.io/projected/8131fc07-9b89-45bb-bebb-a1630a6120af-kube-api-access-4jtdv\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.525341 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.525351 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.525358 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8131fc07-9b89-45bb-bebb-a1630a6120af-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.525366 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8131fc07-9b89-45bb-bebb-a1630a6120af-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:52 crc kubenswrapper[4813]: E0217 09:03:52.728039 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Feb 17 09:03:52 crc kubenswrapper[4813]: E0217 09:03:52.728139 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data podName:f6e2e1ab-9f4e-4021-ab40-cc7f173991ab nodeName:}" failed. No retries permitted until 2026-02-17 09:03:56.728115376 +0000 UTC m=+1384.388876599 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data") pod "watcher-kuttl-api-0" (UID: "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab") : secret "watcher-kuttl-api-config-data" not found Feb 17 09:03:52 crc kubenswrapper[4813]: I0217 09:03:52.815835 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.168:9322/\": read tcp 10.217.0.2:55482->10.217.0.168:9322: read: connection reset by peer" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.099988 4813 generic.go:334] "Generic (PLEG): container finished" podID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerID="936faa10f6eeb132451d190ac8b8b604c6066bb4ea7963ec98ff2a35fc097c0c" exitCode=0 Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.100023 4813 generic.go:334] "Generic (PLEG): container finished" podID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerID="998b10a664f24e4dce2acd8350255c9cadc49408c6235216d1cfc538de15419b" exitCode=2 Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.100034 4813 generic.go:334] "Generic (PLEG): container finished" podID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerID="f2938d69dbfd01aeb009236b3adcdf4f1d172cad51fa0cd88ca70430a9022195" exitCode=0 Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.100043 4813 generic.go:334] "Generic (PLEG): container finished" podID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerID="1fbd2c42922414c58e145012897c3868c17434fb873dfa85a50982a87e2bbd4f" exitCode=0 Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.100088 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3d84905-781b-476c-8ba3-046c5b0a96f5","Type":"ContainerDied","Data":"936faa10f6eeb132451d190ac8b8b604c6066bb4ea7963ec98ff2a35fc097c0c"} Feb 17 09:03:53 
crc kubenswrapper[4813]: I0217 09:03:53.100119 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3d84905-781b-476c-8ba3-046c5b0a96f5","Type":"ContainerDied","Data":"998b10a664f24e4dce2acd8350255c9cadc49408c6235216d1cfc538de15419b"} Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.100132 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3d84905-781b-476c-8ba3-046c5b0a96f5","Type":"ContainerDied","Data":"f2938d69dbfd01aeb009236b3adcdf4f1d172cad51fa0cd88ca70430a9022195"} Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.100144 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3d84905-781b-476c-8ba3-046c5b0a96f5","Type":"ContainerDied","Data":"1fbd2c42922414c58e145012897c3868c17434fb873dfa85a50982a87e2bbd4f"} Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.104449 4813 generic.go:334] "Generic (PLEG): container finished" podID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" containerID="8a79990a1519da57eee2a488fa7869441b79a38b5db437833d9f0fbdd02edeb5" exitCode=0 Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.104508 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab","Type":"ContainerDied","Data":"8a79990a1519da57eee2a488fa7869441b79a38b5db437833d9f0fbdd02edeb5"} Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.108013 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8131fc07-9b89-45bb-bebb-a1630a6120af","Type":"ContainerDied","Data":"fdd674f513761c382248a325049e6e5a971ec89593f24abdb7ae531af3b75ac4"} Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.108055 4813 scope.go:117] "RemoveContainer" containerID="e388fe6699eb05965726d928b29d87cc57839d1d600c33bcb57b7a5f6f978e26" Feb 17 
09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.108071 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.138441 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3" path="/var/lib/kubelet/pods/ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3/volumes" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.186577 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.233797 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.402363 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.410414 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438195 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crcjf\" (UniqueName: \"kubernetes.io/projected/d3d84905-781b-476c-8ba3-046c5b0a96f5-kube-api-access-crcjf\") pod \"d3d84905-781b-476c-8ba3-046c5b0a96f5\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438253 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data\") pod \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438274 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-custom-prometheus-ca\") pod \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438324 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-scripts\") pod \"d3d84905-781b-476c-8ba3-046c5b0a96f5\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438352 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-combined-ca-bundle\") pod \"d3d84905-781b-476c-8ba3-046c5b0a96f5\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438375 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-public-tls-certs\") pod \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438411 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-ceilometer-tls-certs\") pod \"d3d84905-781b-476c-8ba3-046c5b0a96f5\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438433 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3d84905-781b-476c-8ba3-046c5b0a96f5-run-httpd\") pod \"d3d84905-781b-476c-8ba3-046c5b0a96f5\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438460 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64z2v\" (UniqueName: \"kubernetes.io/projected/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-kube-api-access-64z2v\") pod \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438525 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-logs\") pod \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438550 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-internal-tls-certs\") pod \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " Feb 17 09:03:53 crc 
kubenswrapper[4813]: I0217 09:03:53.438576 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-sg-core-conf-yaml\") pod \"d3d84905-781b-476c-8ba3-046c5b0a96f5\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438596 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-config-data\") pod \"d3d84905-781b-476c-8ba3-046c5b0a96f5\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438622 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3d84905-781b-476c-8ba3-046c5b0a96f5-log-httpd\") pod \"d3d84905-781b-476c-8ba3-046c5b0a96f5\" (UID: \"d3d84905-781b-476c-8ba3-046c5b0a96f5\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.438652 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-combined-ca-bundle\") pod \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\" (UID: \"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab\") " Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.439535 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d84905-781b-476c-8ba3-046c5b0a96f5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d3d84905-781b-476c-8ba3-046c5b0a96f5" (UID: "d3d84905-781b-476c-8ba3-046c5b0a96f5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.444850 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3d84905-781b-476c-8ba3-046c5b0a96f5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d3d84905-781b-476c-8ba3-046c5b0a96f5" (UID: "d3d84905-781b-476c-8ba3-046c5b0a96f5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.445477 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d84905-781b-476c-8ba3-046c5b0a96f5-kube-api-access-crcjf" (OuterVolumeSpecName: "kube-api-access-crcjf") pod "d3d84905-781b-476c-8ba3-046c5b0a96f5" (UID: "d3d84905-781b-476c-8ba3-046c5b0a96f5"). InnerVolumeSpecName "kube-api-access-crcjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.445828 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-logs" (OuterVolumeSpecName: "logs") pod "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" (UID: "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.463837 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-scripts" (OuterVolumeSpecName: "scripts") pod "d3d84905-781b-476c-8ba3-046c5b0a96f5" (UID: "d3d84905-781b-476c-8ba3-046c5b0a96f5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.468009 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-kube-api-access-64z2v" (OuterVolumeSpecName: "kube-api-access-64z2v") pod "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" (UID: "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab"). InnerVolumeSpecName "kube-api-access-64z2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.480937 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" (UID: "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.487618 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d3d84905-781b-476c-8ba3-046c5b0a96f5" (UID: "d3d84905-781b-476c-8ba3-046c5b0a96f5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.494229 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" (UID: "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.498700 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" (UID: "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.513628 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data" (OuterVolumeSpecName: "config-data") pod "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" (UID: "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.540288 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.540350 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.540362 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3d84905-781b-476c-8ba3-046c5b0a96f5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.540371 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64z2v\" (UniqueName: \"kubernetes.io/projected/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-kube-api-access-64z2v\") on node \"crc\" DevicePath \"\"" Feb 
17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.540380 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.540388 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.540396 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.540404 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3d84905-781b-476c-8ba3-046c5b0a96f5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.540411 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.540420 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crcjf\" (UniqueName: \"kubernetes.io/projected/d3d84905-781b-476c-8ba3-046c5b0a96f5-kube-api-access-crcjf\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.540428 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.545152 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-db-create-5bqjw"] Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.550131 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d3d84905-781b-476c-8ba3-046c5b0a96f5" (UID: "d3d84905-781b-476c-8ba3-046c5b0a96f5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.550483 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" (UID: "f6e2e1ab-9f4e-4021-ab40-cc7f173991ab"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.552952 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-5bqjw"] Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.558891 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-config-data" (OuterVolumeSpecName: "config-data") pod "d3d84905-781b-476c-8ba3-046c5b0a96f5" (UID: "d3d84905-781b-476c-8ba3-046c5b0a96f5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.560644 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"] Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.560756 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3d84905-781b-476c-8ba3-046c5b0a96f5" (UID: "d3d84905-781b-476c-8ba3-046c5b0a96f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.566589 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-f5e5-account-create-update-5lpfg"] Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.572391 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherf5e5-account-delete-vdzqw"] Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.578268 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherf5e5-account-delete-vdzqw"] Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.640983 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.641012 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.641021 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:53 crc kubenswrapper[4813]: I0217 09:03:53.641031 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d84905-781b-476c-8ba3-046c5b0a96f5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.122953 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.122958 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3d84905-781b-476c-8ba3-046c5b0a96f5","Type":"ContainerDied","Data":"512dd6c354d1769a64bc4e8d28ebf815d935f209b43b95dfb2bb6efc23cca61c"} Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.123056 4813 scope.go:117] "RemoveContainer" containerID="936faa10f6eeb132451d190ac8b8b604c6066bb4ea7963ec98ff2a35fc097c0c" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.129253 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f6e2e1ab-9f4e-4021-ab40-cc7f173991ab","Type":"ContainerDied","Data":"024c091fd6a4996660ac5e952ca8fcef5644ed7bc21269dcb04f70d4a5ad1371"} Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.129401 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.155159 4813 scope.go:117] "RemoveContainer" containerID="998b10a664f24e4dce2acd8350255c9cadc49408c6235216d1cfc538de15419b" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.181978 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.205613 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.213405 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.217911 4813 scope.go:117] "RemoveContainer" containerID="f2938d69dbfd01aeb009236b3adcdf4f1d172cad51fa0cd88ca70430a9022195" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.222890 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229155 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:54 crc kubenswrapper[4813]: E0217 09:03:54.229523 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46514e64-cce7-49de-ab58-06aafff822c8" containerName="mariadb-account-delete" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229543 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="46514e64-cce7-49de-ab58-06aafff822c8" containerName="mariadb-account-delete" Feb 17 09:03:54 crc kubenswrapper[4813]: E0217 09:03:54.229551 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="proxy-httpd" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229558 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="proxy-httpd" Feb 17 09:03:54 crc kubenswrapper[4813]: E0217 09:03:54.229569 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8131fc07-9b89-45bb-bebb-a1630a6120af" containerName="watcher-decision-engine" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229578 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8131fc07-9b89-45bb-bebb-a1630a6120af" containerName="watcher-decision-engine" Feb 17 09:03:54 crc kubenswrapper[4813]: E0217 09:03:54.229585 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" containerName="watcher-kuttl-api-log" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229591 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" containerName="watcher-kuttl-api-log" Feb 17 09:03:54 crc kubenswrapper[4813]: E0217 09:03:54.229601 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="sg-core" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229607 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="sg-core" Feb 17 09:03:54 crc kubenswrapper[4813]: E0217 09:03:54.229621 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="ceilometer-central-agent" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229630 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="ceilometer-central-agent" Feb 17 09:03:54 crc kubenswrapper[4813]: E0217 09:03:54.229641 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="ceilometer-notification-agent" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229649 4813 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="ceilometer-notification-agent" Feb 17 09:03:54 crc kubenswrapper[4813]: E0217 09:03:54.229660 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3" containerName="watcher-applier" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229667 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3" containerName="watcher-applier" Feb 17 09:03:54 crc kubenswrapper[4813]: E0217 09:03:54.229681 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" containerName="watcher-api" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229688 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" containerName="watcher-api" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229835 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" containerName="watcher-api" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229846 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="proxy-httpd" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229860 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" containerName="watcher-kuttl-api-log" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229867 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccdc699e-0784-4dc7-a515-bdd6d9d1d5a3" containerName="watcher-applier" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229875 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="ceilometer-central-agent" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229881 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8131fc07-9b89-45bb-bebb-a1630a6120af" containerName="watcher-decision-engine" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229887 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="ceilometer-notification-agent" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229896 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" containerName="sg-core" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.229908 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="46514e64-cce7-49de-ab58-06aafff822c8" containerName="mariadb-account-delete" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.231540 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.237046 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.237343 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.237469 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.238466 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.248967 4813 scope.go:117] "RemoveContainer" containerID="1fbd2c42922414c58e145012897c3868c17434fb873dfa85a50982a87e2bbd4f" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.256357 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-config-data\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.256410 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.256467 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-log-httpd\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.256551 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-scripts\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.256589 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.256661 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxrl\" (UniqueName: 
\"kubernetes.io/projected/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-kube-api-access-djxrl\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.256899 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.257038 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-run-httpd\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.271497 4813 scope.go:117] "RemoveContainer" containerID="8a79990a1519da57eee2a488fa7869441b79a38b5db437833d9f0fbdd02edeb5" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.290420 4813 scope.go:117] "RemoveContainer" containerID="7a8180c1feeaa350d170d8f82d840987de70ebdb6123f3f8b69dada3b3729317" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.359020 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-run-httpd\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.359138 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-config-data\") pod \"ceilometer-0\" (UID: 
\"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.359191 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.359239 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-log-httpd\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.359335 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-scripts\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.359374 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.359438 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxrl\" (UniqueName: \"kubernetes.io/projected/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-kube-api-access-djxrl\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.359499 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.359869 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-run-httpd\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.359914 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-log-httpd\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.363707 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.364008 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-config-data\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.364782 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.365099 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.367244 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-scripts\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.374707 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxrl\" (UniqueName: \"kubernetes.io/projected/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-kube-api-access-djxrl\") pod \"ceilometer-0\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:54 crc kubenswrapper[4813]: I0217 09:03:54.547753 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.003825 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:03:55 crc kubenswrapper[4813]: W0217 09:03:55.005776 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ea1d684_24a5_4f95_9e83_9da7b79bf5f7.slice/crio-bbdbf33b9ba02ebdb06b6c02c946f1f80522f9ee838a56fe1c4760ea8af12086 WatchSource:0}: Error finding container bbdbf33b9ba02ebdb06b6c02c946f1f80522f9ee838a56fe1c4760ea8af12086: Status 404 returned error can't find the container with id bbdbf33b9ba02ebdb06b6c02c946f1f80522f9ee838a56fe1c4760ea8af12086 Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.121963 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="254d5e59-34ee-45db-a654-1277da38cbee" path="/var/lib/kubelet/pods/254d5e59-34ee-45db-a654-1277da38cbee/volumes" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.122605 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46514e64-cce7-49de-ab58-06aafff822c8" path="/var/lib/kubelet/pods/46514e64-cce7-49de-ab58-06aafff822c8/volumes" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.123074 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8131fc07-9b89-45bb-bebb-a1630a6120af" path="/var/lib/kubelet/pods/8131fc07-9b89-45bb-bebb-a1630a6120af/volumes" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.123940 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d84905-781b-476c-8ba3-046c5b0a96f5" path="/var/lib/kubelet/pods/d3d84905-781b-476c-8ba3-046c5b0a96f5/volumes" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.124556 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56492e9-7c2d-4713-8d1a-6948291ecd68" 
path="/var/lib/kubelet/pods/d56492e9-7c2d-4713-8d1a-6948291ecd68/volumes" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.125012 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6e2e1ab-9f4e-4021-ab40-cc7f173991ab" path="/var/lib/kubelet/pods/f6e2e1ab-9f4e-4021-ab40-cc7f173991ab/volumes" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.139463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7","Type":"ContainerStarted","Data":"bbdbf33b9ba02ebdb06b6c02c946f1f80522f9ee838a56fe1c4760ea8af12086"} Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.751243 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-9p5qz"] Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.752641 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-9p5qz" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.764462 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-9p5qz"] Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.780494 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwpv9\" (UniqueName: \"kubernetes.io/projected/c09576b1-96ef-453c-bea5-c6b76a69e4aa-kube-api-access-zwpv9\") pod \"watcher-db-create-9p5qz\" (UID: \"c09576b1-96ef-453c-bea5-c6b76a69e4aa\") " pod="watcher-kuttl-default/watcher-db-create-9p5qz" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.780695 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c09576b1-96ef-453c-bea5-c6b76a69e4aa-operator-scripts\") pod \"watcher-db-create-9p5qz\" (UID: \"c09576b1-96ef-453c-bea5-c6b76a69e4aa\") " 
pod="watcher-kuttl-default/watcher-db-create-9p5qz" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.874161 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w"] Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.876470 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.882672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c09576b1-96ef-453c-bea5-c6b76a69e4aa-operator-scripts\") pod \"watcher-db-create-9p5qz\" (UID: \"c09576b1-96ef-453c-bea5-c6b76a69e4aa\") " pod="watcher-kuttl-default/watcher-db-create-9p5qz" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.882760 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwpv9\" (UniqueName: \"kubernetes.io/projected/c09576b1-96ef-453c-bea5-c6b76a69e4aa-kube-api-access-zwpv9\") pod \"watcher-db-create-9p5qz\" (UID: \"c09576b1-96ef-453c-bea5-c6b76a69e4aa\") " pod="watcher-kuttl-default/watcher-db-create-9p5qz" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.882822 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373f7522-5889-457c-a4fd-611eeb468cf6-operator-scripts\") pod \"watcher-94fc-account-create-update-m2f7w\" (UID: \"373f7522-5889-457c-a4fd-611eeb468cf6\") " pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.882885 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r2dq\" (UniqueName: \"kubernetes.io/projected/373f7522-5889-457c-a4fd-611eeb468cf6-kube-api-access-2r2dq\") pod 
\"watcher-94fc-account-create-update-m2f7w\" (UID: \"373f7522-5889-457c-a4fd-611eeb468cf6\") " pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.883898 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c09576b1-96ef-453c-bea5-c6b76a69e4aa-operator-scripts\") pod \"watcher-db-create-9p5qz\" (UID: \"c09576b1-96ef-453c-bea5-c6b76a69e4aa\") " pod="watcher-kuttl-default/watcher-db-create-9p5qz" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.884446 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.884955 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w"] Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.910861 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwpv9\" (UniqueName: \"kubernetes.io/projected/c09576b1-96ef-453c-bea5-c6b76a69e4aa-kube-api-access-zwpv9\") pod \"watcher-db-create-9p5qz\" (UID: \"c09576b1-96ef-453c-bea5-c6b76a69e4aa\") " pod="watcher-kuttl-default/watcher-db-create-9p5qz" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.983510 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373f7522-5889-457c-a4fd-611eeb468cf6-operator-scripts\") pod \"watcher-94fc-account-create-update-m2f7w\" (UID: \"373f7522-5889-457c-a4fd-611eeb468cf6\") " pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.983570 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r2dq\" (UniqueName: 
\"kubernetes.io/projected/373f7522-5889-457c-a4fd-611eeb468cf6-kube-api-access-2r2dq\") pod \"watcher-94fc-account-create-update-m2f7w\" (UID: \"373f7522-5889-457c-a4fd-611eeb468cf6\") " pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" Feb 17 09:03:55 crc kubenswrapper[4813]: I0217 09:03:55.984288 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373f7522-5889-457c-a4fd-611eeb468cf6-operator-scripts\") pod \"watcher-94fc-account-create-update-m2f7w\" (UID: \"373f7522-5889-457c-a4fd-611eeb468cf6\") " pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" Feb 17 09:03:56 crc kubenswrapper[4813]: I0217 09:03:56.006771 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r2dq\" (UniqueName: \"kubernetes.io/projected/373f7522-5889-457c-a4fd-611eeb468cf6-kube-api-access-2r2dq\") pod \"watcher-94fc-account-create-update-m2f7w\" (UID: \"373f7522-5889-457c-a4fd-611eeb468cf6\") " pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" Feb 17 09:03:56 crc kubenswrapper[4813]: I0217 09:03:56.068442 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-9p5qz" Feb 17 09:03:56 crc kubenswrapper[4813]: I0217 09:03:56.157507 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7","Type":"ContainerStarted","Data":"d7c74648f46a75b52a5cd313f8fa93c6dced3dd6969f0ffbc191ebc4a058c586"} Feb 17 09:03:56 crc kubenswrapper[4813]: I0217 09:03:56.206797 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" Feb 17 09:03:56 crc kubenswrapper[4813]: I0217 09:03:56.600493 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-9p5qz"] Feb 17 09:03:56 crc kubenswrapper[4813]: W0217 09:03:56.603135 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc09576b1_96ef_453c_bea5_c6b76a69e4aa.slice/crio-da19f0c4767c2f7d5347ce3587fbc7914b2cca35c8702e09bfeb9ca9c828c3f7 WatchSource:0}: Error finding container da19f0c4767c2f7d5347ce3587fbc7914b2cca35c8702e09bfeb9ca9c828c3f7: Status 404 returned error can't find the container with id da19f0c4767c2f7d5347ce3587fbc7914b2cca35c8702e09bfeb9ca9c828c3f7 Feb 17 09:03:56 crc kubenswrapper[4813]: I0217 09:03:56.762567 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w"] Feb 17 09:03:57 crc kubenswrapper[4813]: I0217 09:03:57.166021 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7","Type":"ContainerStarted","Data":"b88a0681dc427e1d2d157d4a188f76d5543e691f3e76cd33c6f221d244dd7ee5"} Feb 17 09:03:57 crc kubenswrapper[4813]: I0217 09:03:57.166334 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7","Type":"ContainerStarted","Data":"5a2bfb57081418f0b0cbd979ffc39bd90bc6114cee7ef91a8140f3b6f4663a2c"} Feb 17 09:03:57 crc kubenswrapper[4813]: I0217 09:03:57.167865 4813 generic.go:334] "Generic (PLEG): container finished" podID="373f7522-5889-457c-a4fd-611eeb468cf6" containerID="8f5a36869b7208ed9002b79d5287230810331e49b113dbd534aae90d865e2529" exitCode=0 Feb 17 09:03:57 crc kubenswrapper[4813]: I0217 09:03:57.167939 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" event={"ID":"373f7522-5889-457c-a4fd-611eeb468cf6","Type":"ContainerDied","Data":"8f5a36869b7208ed9002b79d5287230810331e49b113dbd534aae90d865e2529"} Feb 17 09:03:57 crc kubenswrapper[4813]: I0217 09:03:57.167982 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" event={"ID":"373f7522-5889-457c-a4fd-611eeb468cf6","Type":"ContainerStarted","Data":"258d8b25d44fb3968642bb6fec8c711fbe0b63f6527a1bf1554b3c95f619b96d"} Feb 17 09:03:57 crc kubenswrapper[4813]: I0217 09:03:57.169395 4813 generic.go:334] "Generic (PLEG): container finished" podID="c09576b1-96ef-453c-bea5-c6b76a69e4aa" containerID="749f9ab04aad404668df7ac3c8235566fef6761e0e79f144881aabf316deb623" exitCode=0 Feb 17 09:03:57 crc kubenswrapper[4813]: I0217 09:03:57.169548 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-9p5qz" event={"ID":"c09576b1-96ef-453c-bea5-c6b76a69e4aa","Type":"ContainerDied","Data":"749f9ab04aad404668df7ac3c8235566fef6761e0e79f144881aabf316deb623"} Feb 17 09:03:57 crc kubenswrapper[4813]: I0217 09:03:57.169580 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-9p5qz" event={"ID":"c09576b1-96ef-453c-bea5-c6b76a69e4aa","Type":"ContainerStarted","Data":"da19f0c4767c2f7d5347ce3587fbc7914b2cca35c8702e09bfeb9ca9c828c3f7"} Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.608119 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-9p5qz" Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.612696 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.731208 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwpv9\" (UniqueName: \"kubernetes.io/projected/c09576b1-96ef-453c-bea5-c6b76a69e4aa-kube-api-access-zwpv9\") pod \"c09576b1-96ef-453c-bea5-c6b76a69e4aa\" (UID: \"c09576b1-96ef-453c-bea5-c6b76a69e4aa\") " Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.731344 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r2dq\" (UniqueName: \"kubernetes.io/projected/373f7522-5889-457c-a4fd-611eeb468cf6-kube-api-access-2r2dq\") pod \"373f7522-5889-457c-a4fd-611eeb468cf6\" (UID: \"373f7522-5889-457c-a4fd-611eeb468cf6\") " Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.731406 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373f7522-5889-457c-a4fd-611eeb468cf6-operator-scripts\") pod \"373f7522-5889-457c-a4fd-611eeb468cf6\" (UID: \"373f7522-5889-457c-a4fd-611eeb468cf6\") " Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.731439 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c09576b1-96ef-453c-bea5-c6b76a69e4aa-operator-scripts\") pod \"c09576b1-96ef-453c-bea5-c6b76a69e4aa\" (UID: \"c09576b1-96ef-453c-bea5-c6b76a69e4aa\") " Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.731928 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/373f7522-5889-457c-a4fd-611eeb468cf6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "373f7522-5889-457c-a4fd-611eeb468cf6" (UID: "373f7522-5889-457c-a4fd-611eeb468cf6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.732144 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09576b1-96ef-453c-bea5-c6b76a69e4aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c09576b1-96ef-453c-bea5-c6b76a69e4aa" (UID: "c09576b1-96ef-453c-bea5-c6b76a69e4aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.735507 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/373f7522-5889-457c-a4fd-611eeb468cf6-kube-api-access-2r2dq" (OuterVolumeSpecName: "kube-api-access-2r2dq") pod "373f7522-5889-457c-a4fd-611eeb468cf6" (UID: "373f7522-5889-457c-a4fd-611eeb468cf6"). InnerVolumeSpecName "kube-api-access-2r2dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.736024 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09576b1-96ef-453c-bea5-c6b76a69e4aa-kube-api-access-zwpv9" (OuterVolumeSpecName: "kube-api-access-zwpv9") pod "c09576b1-96ef-453c-bea5-c6b76a69e4aa" (UID: "c09576b1-96ef-453c-bea5-c6b76a69e4aa"). InnerVolumeSpecName "kube-api-access-zwpv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.833904 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwpv9\" (UniqueName: \"kubernetes.io/projected/c09576b1-96ef-453c-bea5-c6b76a69e4aa-kube-api-access-zwpv9\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.833950 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r2dq\" (UniqueName: \"kubernetes.io/projected/373f7522-5889-457c-a4fd-611eeb468cf6-kube-api-access-2r2dq\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.833965 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373f7522-5889-457c-a4fd-611eeb468cf6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:58 crc kubenswrapper[4813]: I0217 09:03:58.833976 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c09576b1-96ef-453c-bea5-c6b76a69e4aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:03:59 crc kubenswrapper[4813]: I0217 09:03:59.186866 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7","Type":"ContainerStarted","Data":"b0fa36ae085a23b1f99bcce2e1ed696f31839da703cd62adc6524f1c040cb8bc"} Feb 17 09:03:59 crc kubenswrapper[4813]: I0217 09:03:59.188298 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:03:59 crc kubenswrapper[4813]: I0217 09:03:59.192724 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" event={"ID":"373f7522-5889-457c-a4fd-611eeb468cf6","Type":"ContainerDied","Data":"258d8b25d44fb3968642bb6fec8c711fbe0b63f6527a1bf1554b3c95f619b96d"} Feb 17 09:03:59 
crc kubenswrapper[4813]: I0217 09:03:59.192756 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="258d8b25d44fb3968642bb6fec8c711fbe0b63f6527a1bf1554b3c95f619b96d" Feb 17 09:03:59 crc kubenswrapper[4813]: I0217 09:03:59.192812 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w" Feb 17 09:03:59 crc kubenswrapper[4813]: I0217 09:03:59.194875 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-9p5qz" event={"ID":"c09576b1-96ef-453c-bea5-c6b76a69e4aa","Type":"ContainerDied","Data":"da19f0c4767c2f7d5347ce3587fbc7914b2cca35c8702e09bfeb9ca9c828c3f7"} Feb 17 09:03:59 crc kubenswrapper[4813]: I0217 09:03:59.194903 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da19f0c4767c2f7d5347ce3587fbc7914b2cca35c8702e09bfeb9ca9c828c3f7" Feb 17 09:03:59 crc kubenswrapper[4813]: I0217 09:03:59.194913 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-9p5qz" Feb 17 09:03:59 crc kubenswrapper[4813]: I0217 09:03:59.232225 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.217672925 podStartE2EDuration="5.232173708s" podCreationTimestamp="2026-02-17 09:03:54 +0000 UTC" firstStartedPulling="2026-02-17 09:03:55.007323111 +0000 UTC m=+1382.668084334" lastFinishedPulling="2026-02-17 09:03:58.021823894 +0000 UTC m=+1385.682585117" observedRunningTime="2026-02-17 09:03:59.220241649 +0000 UTC m=+1386.881002882" watchObservedRunningTime="2026-02-17 09:03:59.232173708 +0000 UTC m=+1386.892934961" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.200702 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb"] Feb 17 09:04:01 crc kubenswrapper[4813]: E0217 09:04:01.202165 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09576b1-96ef-453c-bea5-c6b76a69e4aa" containerName="mariadb-database-create" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.202276 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09576b1-96ef-453c-bea5-c6b76a69e4aa" containerName="mariadb-database-create" Feb 17 09:04:01 crc kubenswrapper[4813]: E0217 09:04:01.202391 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373f7522-5889-457c-a4fd-611eeb468cf6" containerName="mariadb-account-create-update" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.202467 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="373f7522-5889-457c-a4fd-611eeb468cf6" containerName="mariadb-account-create-update" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.202741 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="373f7522-5889-457c-a4fd-611eeb468cf6" containerName="mariadb-account-create-update" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.202845 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c09576b1-96ef-453c-bea5-c6b76a69e4aa" containerName="mariadb-database-create" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.203547 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.208719 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.209255 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-fz7fp" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.209602 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb"] Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.372594 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-config-data\") pod \"watcher-kuttl-db-sync-jtqlb\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.372685 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-db-sync-config-data\") pod \"watcher-kuttl-db-sync-jtqlb\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.372716 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx9vq\" (UniqueName: 
\"kubernetes.io/projected/1c21930c-0101-46c6-82ff-08fcae8ecb02-kube-api-access-kx9vq\") pod \"watcher-kuttl-db-sync-jtqlb\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.372746 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-jtqlb\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.474405 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-jtqlb\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.474526 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-config-data\") pod \"watcher-kuttl-db-sync-jtqlb\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.474580 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-db-sync-config-data\") pod \"watcher-kuttl-db-sync-jtqlb\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.474600 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kx9vq\" (UniqueName: \"kubernetes.io/projected/1c21930c-0101-46c6-82ff-08fcae8ecb02-kube-api-access-kx9vq\") pod \"watcher-kuttl-db-sync-jtqlb\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.479703 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-db-sync-config-data\") pod \"watcher-kuttl-db-sync-jtqlb\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.480878 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-config-data\") pod \"watcher-kuttl-db-sync-jtqlb\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.481052 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-jtqlb\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.494435 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx9vq\" (UniqueName: \"kubernetes.io/projected/1c21930c-0101-46c6-82ff-08fcae8ecb02-kube-api-access-kx9vq\") pod \"watcher-kuttl-db-sync-jtqlb\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.524369 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:01 crc kubenswrapper[4813]: I0217 09:04:01.969795 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb"] Feb 17 09:04:02 crc kubenswrapper[4813]: I0217 09:04:02.251970 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" event={"ID":"1c21930c-0101-46c6-82ff-08fcae8ecb02","Type":"ContainerStarted","Data":"b3df3c95d0e6625b92ac626e93daf0905c2dab15eda9c9f9cdd9a39db3099dc2"} Feb 17 09:04:02 crc kubenswrapper[4813]: I0217 09:04:02.252326 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" event={"ID":"1c21930c-0101-46c6-82ff-08fcae8ecb02","Type":"ContainerStarted","Data":"1a8d8cb2f1fe548e8f3d37664e9f3efedba0bdface2ff10c5e7ba778f969b606"} Feb 17 09:04:02 crc kubenswrapper[4813]: I0217 09:04:02.273721 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" podStartSLOduration=1.273704021 podStartE2EDuration="1.273704021s" podCreationTimestamp="2026-02-17 09:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:04:02.26524231 +0000 UTC m=+1389.926003533" watchObservedRunningTime="2026-02-17 09:04:02.273704021 +0000 UTC m=+1389.934465244" Feb 17 09:04:05 crc kubenswrapper[4813]: I0217 09:04:05.281343 4813 generic.go:334] "Generic (PLEG): container finished" podID="1c21930c-0101-46c6-82ff-08fcae8ecb02" containerID="b3df3c95d0e6625b92ac626e93daf0905c2dab15eda9c9f9cdd9a39db3099dc2" exitCode=0 Feb 17 09:04:05 crc kubenswrapper[4813]: I0217 09:04:05.281733 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" 
event={"ID":"1c21930c-0101-46c6-82ff-08fcae8ecb02","Type":"ContainerDied","Data":"b3df3c95d0e6625b92ac626e93daf0905c2dab15eda9c9f9cdd9a39db3099dc2"} Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.674482 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.765810 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx9vq\" (UniqueName: \"kubernetes.io/projected/1c21930c-0101-46c6-82ff-08fcae8ecb02-kube-api-access-kx9vq\") pod \"1c21930c-0101-46c6-82ff-08fcae8ecb02\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.765865 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-combined-ca-bundle\") pod \"1c21930c-0101-46c6-82ff-08fcae8ecb02\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.765908 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-config-data\") pod \"1c21930c-0101-46c6-82ff-08fcae8ecb02\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.765939 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-db-sync-config-data\") pod \"1c21930c-0101-46c6-82ff-08fcae8ecb02\" (UID: \"1c21930c-0101-46c6-82ff-08fcae8ecb02\") " Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.772060 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1c21930c-0101-46c6-82ff-08fcae8ecb02" (UID: "1c21930c-0101-46c6-82ff-08fcae8ecb02"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.778180 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c21930c-0101-46c6-82ff-08fcae8ecb02-kube-api-access-kx9vq" (OuterVolumeSpecName: "kube-api-access-kx9vq") pod "1c21930c-0101-46c6-82ff-08fcae8ecb02" (UID: "1c21930c-0101-46c6-82ff-08fcae8ecb02"). InnerVolumeSpecName "kube-api-access-kx9vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.791541 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c21930c-0101-46c6-82ff-08fcae8ecb02" (UID: "1c21930c-0101-46c6-82ff-08fcae8ecb02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.812305 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-config-data" (OuterVolumeSpecName: "config-data") pod "1c21930c-0101-46c6-82ff-08fcae8ecb02" (UID: "1c21930c-0101-46c6-82ff-08fcae8ecb02"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.867376 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx9vq\" (UniqueName: \"kubernetes.io/projected/1c21930c-0101-46c6-82ff-08fcae8ecb02-kube-api-access-kx9vq\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.867409 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.867419 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:06 crc kubenswrapper[4813]: I0217 09:04:06.867427 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1c21930c-0101-46c6-82ff-08fcae8ecb02-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.304425 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" event={"ID":"1c21930c-0101-46c6-82ff-08fcae8ecb02","Type":"ContainerDied","Data":"1a8d8cb2f1fe548e8f3d37664e9f3efedba0bdface2ff10c5e7ba778f969b606"} Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.304468 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a8d8cb2f1fe548e8f3d37664e9f3efedba0bdface2ff10c5e7ba778f969b606" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.304858 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.674079 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:04:07 crc kubenswrapper[4813]: E0217 09:04:07.674475 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c21930c-0101-46c6-82ff-08fcae8ecb02" containerName="watcher-kuttl-db-sync" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.674497 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c21930c-0101-46c6-82ff-08fcae8ecb02" containerName="watcher-kuttl-db-sync" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.674690 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c21930c-0101-46c6-82ff-08fcae8ecb02" containerName="watcher-kuttl-db-sync" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.675713 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.684086 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.684153 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-fz7fp" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.685794 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.707269 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.708347 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.711990 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.767674 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.781390 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.781448 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jclvg\" (UniqueName: \"kubernetes.io/projected/f51f7109-2fd1-44a0-a988-276436bd1498-kube-api-access-jclvg\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.781489 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.781519 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: 
\"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.781542 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f51f7109-2fd1-44a0-a988-276436bd1498-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.781558 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.781585 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.799989 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.826922 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.828114 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.847426 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.876435 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.882085 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.883062 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.893370 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.893647 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jclvg\" (UniqueName: \"kubernetes.io/projected/f51f7109-2fd1-44a0-a988-276436bd1498-kube-api-access-jclvg\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.893779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2z624\" (UniqueName: \"kubernetes.io/projected/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-kube-api-access-2z624\") pod \"watcher-kuttl-applier-0\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.895254 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.895621 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.895726 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.897091 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.897194 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/f51f7109-2fd1-44a0-a988-276436bd1498-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.897267 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.897519 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.898631 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.899607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f51f7109-2fd1-44a0-a988-276436bd1498-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.904875 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: 
\"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.907853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.907889 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.917127 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.982767 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jclvg\" (UniqueName: \"kubernetes.io/projected/f51f7109-2fd1-44a0-a988-276436bd1498-kube-api-access-jclvg\") pod \"watcher-kuttl-api-0\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:07 crc kubenswrapper[4813]: I0217 09:04:07.990046 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.004748 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z624\" (UniqueName: \"kubernetes.io/projected/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-kube-api-access-2z624\") pod \"watcher-kuttl-applier-0\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.004834 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.004875 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.004915 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.004946 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-custom-prometheus-ca\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.004965 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ntl2\" (UniqueName: \"kubernetes.io/projected/e4c08d2c-277c-46e7-a274-56a389113175-kube-api-access-4ntl2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.005002 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.005031 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.005070 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4c08d2c-277c-46e7-a274-56a389113175-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.005748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.009805 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.010907 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.064031 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z624\" (UniqueName: \"kubernetes.io/projected/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-kube-api-access-2z624\") pod \"watcher-kuttl-applier-0\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.106364 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.106438 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4c08d2c-277c-46e7-a274-56a389113175-logs\") 
pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.106504 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.106531 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.106555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ntl2\" (UniqueName: \"kubernetes.io/projected/e4c08d2c-277c-46e7-a274-56a389113175-kube-api-access-4ntl2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.107378 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4c08d2c-277c-46e7-a274-56a389113175-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.112123 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.112804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.114275 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.128862 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ntl2\" (UniqueName: \"kubernetes.io/projected/e4c08d2c-277c-46e7-a274-56a389113175-kube-api-access-4ntl2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.151634 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.328493 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.565410 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:04:08 crc kubenswrapper[4813]: W0217 09:04:08.570234 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf51f7109_2fd1_44a0_a988_276436bd1498.slice/crio-5a13340e3507fa130333abdb7dff9ea69b4c485da777e9c0730f6cfab9d61352 WatchSource:0}: Error finding container 5a13340e3507fa130333abdb7dff9ea69b4c485da777e9c0730f6cfab9d61352: Status 404 returned error can't find the container with id 5a13340e3507fa130333abdb7dff9ea69b4c485da777e9c0730f6cfab9d61352 Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.697812 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:04:08 crc kubenswrapper[4813]: I0217 09:04:08.861294 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:04:08 crc kubenswrapper[4813]: W0217 09:04:08.868483 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb6c5674_7a47_4f50_9836_5e13d6b6ee0e.slice/crio-c311210bf9bfae4acf6a41962e036d9b5612e2ecd5d314c451fd7549efaaa07c WatchSource:0}: Error finding container c311210bf9bfae4acf6a41962e036d9b5612e2ecd5d314c451fd7549efaaa07c: Status 404 returned error can't find the container with id c311210bf9bfae4acf6a41962e036d9b5612e2ecd5d314c451fd7549efaaa07c Feb 17 09:04:09 crc kubenswrapper[4813]: I0217 09:04:09.350432 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e","Type":"ContainerStarted","Data":"7ae2b9a9be7d3b106d42f7d963b6f1ab955679abf36954ba1376e65e0dfc6d83"} 
Feb 17 09:04:09 crc kubenswrapper[4813]: I0217 09:04:09.351193 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e","Type":"ContainerStarted","Data":"c311210bf9bfae4acf6a41962e036d9b5612e2ecd5d314c451fd7549efaaa07c"} Feb 17 09:04:09 crc kubenswrapper[4813]: I0217 09:04:09.355622 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e4c08d2c-277c-46e7-a274-56a389113175","Type":"ContainerStarted","Data":"5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a"} Feb 17 09:04:09 crc kubenswrapper[4813]: I0217 09:04:09.355686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e4c08d2c-277c-46e7-a274-56a389113175","Type":"ContainerStarted","Data":"88a1a8114a0602b503b1333ff366205a66a518ac54056de6ed8d96fe20d371a8"} Feb 17 09:04:09 crc kubenswrapper[4813]: I0217 09:04:09.357841 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f51f7109-2fd1-44a0-a988-276436bd1498","Type":"ContainerStarted","Data":"33626573aa204ef5fda9941c244bd161695b4873c95b6c3ee96dec46cef954f1"} Feb 17 09:04:09 crc kubenswrapper[4813]: I0217 09:04:09.357883 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f51f7109-2fd1-44a0-a988-276436bd1498","Type":"ContainerStarted","Data":"74643e846b4183f4228ee78b16b82485187ab1bdeec381ae08fcb4088e4e9110"} Feb 17 09:04:09 crc kubenswrapper[4813]: I0217 09:04:09.357898 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f51f7109-2fd1-44a0-a988-276436bd1498","Type":"ContainerStarted","Data":"5a13340e3507fa130333abdb7dff9ea69b4c485da777e9c0730f6cfab9d61352"} Feb 17 09:04:09 crc kubenswrapper[4813]: I0217 09:04:09.358563 
4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:09 crc kubenswrapper[4813]: I0217 09:04:09.378573 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.378545307 podStartE2EDuration="2.378545307s" podCreationTimestamp="2026-02-17 09:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:04:09.370642792 +0000 UTC m=+1397.031404015" watchObservedRunningTime="2026-02-17 09:04:09.378545307 +0000 UTC m=+1397.039306530" Feb 17 09:04:09 crc kubenswrapper[4813]: I0217 09:04:09.421217 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.42119988 podStartE2EDuration="2.42119988s" podCreationTimestamp="2026-02-17 09:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:04:09.414684815 +0000 UTC m=+1397.075446058" watchObservedRunningTime="2026-02-17 09:04:09.42119988 +0000 UTC m=+1397.081961103" Feb 17 09:04:09 crc kubenswrapper[4813]: I0217 09:04:09.422342 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.422333773 podStartE2EDuration="2.422333773s" podCreationTimestamp="2026-02-17 09:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:04:09.397561618 +0000 UTC m=+1397.058322841" watchObservedRunningTime="2026-02-17 09:04:09.422333773 +0000 UTC m=+1397.083095006" Feb 17 09:04:11 crc kubenswrapper[4813]: I0217 09:04:11.373557 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 
09:04:11 crc kubenswrapper[4813]: I0217 09:04:11.423793 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:12 crc kubenswrapper[4813]: I0217 09:04:12.990921 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:13 crc kubenswrapper[4813]: I0217 09:04:13.329044 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:17 crc kubenswrapper[4813]: I0217 09:04:17.991099 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:18 crc kubenswrapper[4813]: I0217 09:04:18.003019 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:18 crc kubenswrapper[4813]: I0217 09:04:18.152747 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:18 crc kubenswrapper[4813]: I0217 09:04:18.195424 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:18 crc kubenswrapper[4813]: I0217 09:04:18.329056 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:18 crc kubenswrapper[4813]: I0217 09:04:18.370728 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:18 crc kubenswrapper[4813]: I0217 09:04:18.436690 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:18 crc kubenswrapper[4813]: I0217 09:04:18.449643 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:18 crc kubenswrapper[4813]: I0217 09:04:18.475544 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:18 crc kubenswrapper[4813]: I0217 09:04:18.511644 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:20 crc kubenswrapper[4813]: I0217 09:04:20.575087 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:04:20 crc kubenswrapper[4813]: I0217 09:04:20.575762 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="ceilometer-central-agent" containerID="cri-o://d7c74648f46a75b52a5cd313f8fa93c6dced3dd6969f0ffbc191ebc4a058c586" gracePeriod=30 Feb 17 09:04:20 crc kubenswrapper[4813]: I0217 09:04:20.576488 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="proxy-httpd" containerID="cri-o://b0fa36ae085a23b1f99bcce2e1ed696f31839da703cd62adc6524f1c040cb8bc" gracePeriod=30 Feb 17 09:04:20 crc kubenswrapper[4813]: I0217 09:04:20.576530 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="sg-core" containerID="cri-o://b88a0681dc427e1d2d157d4a188f76d5543e691f3e76cd33c6f221d244dd7ee5" gracePeriod=30 Feb 17 09:04:20 crc kubenswrapper[4813]: I0217 09:04:20.576511 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="ceilometer-notification-agent" 
containerID="cri-o://5a2bfb57081418f0b0cbd979ffc39bd90bc6114cee7ef91a8140f3b6f4663a2c" gracePeriod=30 Feb 17 09:04:20 crc kubenswrapper[4813]: I0217 09:04:20.620534 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.171:3000/\": EOF" Feb 17 09:04:21 crc kubenswrapper[4813]: I0217 09:04:21.465782 4813 generic.go:334] "Generic (PLEG): container finished" podID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerID="b0fa36ae085a23b1f99bcce2e1ed696f31839da703cd62adc6524f1c040cb8bc" exitCode=0 Feb 17 09:04:21 crc kubenswrapper[4813]: I0217 09:04:21.466220 4813 generic.go:334] "Generic (PLEG): container finished" podID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerID="b88a0681dc427e1d2d157d4a188f76d5543e691f3e76cd33c6f221d244dd7ee5" exitCode=2 Feb 17 09:04:21 crc kubenswrapper[4813]: I0217 09:04:21.465864 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7","Type":"ContainerDied","Data":"b0fa36ae085a23b1f99bcce2e1ed696f31839da703cd62adc6524f1c040cb8bc"} Feb 17 09:04:21 crc kubenswrapper[4813]: I0217 09:04:21.466279 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7","Type":"ContainerDied","Data":"b88a0681dc427e1d2d157d4a188f76d5543e691f3e76cd33c6f221d244dd7ee5"} Feb 17 09:04:21 crc kubenswrapper[4813]: I0217 09:04:21.466303 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7","Type":"ContainerDied","Data":"d7c74648f46a75b52a5cd313f8fa93c6dced3dd6969f0ffbc191ebc4a058c586"} Feb 17 09:04:21 crc kubenswrapper[4813]: I0217 09:04:21.466239 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerID="d7c74648f46a75b52a5cd313f8fa93c6dced3dd6969f0ffbc191ebc4a058c586" exitCode=0 Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.479798 4813 generic.go:334] "Generic (PLEG): container finished" podID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerID="5a2bfb57081418f0b0cbd979ffc39bd90bc6114cee7ef91a8140f3b6f4663a2c" exitCode=0 Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.480025 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7","Type":"ContainerDied","Data":"5a2bfb57081418f0b0cbd979ffc39bd90bc6114cee7ef91a8140f3b6f4663a2c"} Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.647735 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.744049 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-sg-core-conf-yaml\") pod \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.744190 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-ceilometer-tls-certs\") pod \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.744243 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-log-httpd\") pod \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 
09:04:22.744509 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djxrl\" (UniqueName: \"kubernetes.io/projected/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-kube-api-access-djxrl\") pod \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.744590 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-combined-ca-bundle\") pod \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.744631 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-scripts\") pod \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.744666 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-config-data\") pod \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.744702 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-run-httpd\") pod \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\" (UID: \"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7\") " Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.744828 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" 
(UID: "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.745199 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.746302 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" (UID: "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.754273 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-kube-api-access-djxrl" (OuterVolumeSpecName: "kube-api-access-djxrl") pod "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" (UID: "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7"). InnerVolumeSpecName "kube-api-access-djxrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.768129 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-scripts" (OuterVolumeSpecName: "scripts") pod "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" (UID: "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.768473 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" (UID: "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.791345 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" (UID: "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.834388 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-config-data" (OuterVolumeSpecName: "config-data") pod "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" (UID: "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.840582 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" (UID: "0ea1d684-24a5-4f95-9e83-9da7b79bf5f7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.846381 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.846404 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.846414 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djxrl\" (UniqueName: \"kubernetes.io/projected/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-kube-api-access-djxrl\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.846423 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.846431 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.846440 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:22 crc kubenswrapper[4813]: I0217 09:04:22.846450 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.498620 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0ea1d684-24a5-4f95-9e83-9da7b79bf5f7","Type":"ContainerDied","Data":"bbdbf33b9ba02ebdb06b6c02c946f1f80522f9ee838a56fe1c4760ea8af12086"} Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.498698 4813 scope.go:117] "RemoveContainer" containerID="b0fa36ae085a23b1f99bcce2e1ed696f31839da703cd62adc6524f1c040cb8bc" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.498725 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.534607 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.551537 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.553364 4813 scope.go:117] "RemoveContainer" containerID="b88a0681dc427e1d2d157d4a188f76d5543e691f3e76cd33c6f221d244dd7ee5" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.560103 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:04:23 crc kubenswrapper[4813]: E0217 09:04:23.560587 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="sg-core" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.560613 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="sg-core" Feb 17 09:04:23 crc kubenswrapper[4813]: E0217 09:04:23.560626 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="proxy-httpd" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.560634 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="proxy-httpd" Feb 17 09:04:23 crc kubenswrapper[4813]: E0217 09:04:23.560659 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="ceilometer-notification-agent" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.560668 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="ceilometer-notification-agent" Feb 17 09:04:23 crc kubenswrapper[4813]: E0217 09:04:23.560687 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="ceilometer-central-agent" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.560694 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="ceilometer-central-agent" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.560896 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="ceilometer-notification-agent" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.560921 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="sg-core" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.560933 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="proxy-httpd" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.560950 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" containerName="ceilometer-central-agent" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.562660 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.565227 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.565469 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.566121 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.583478 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.605758 4813 scope.go:117] "RemoveContainer" containerID="5a2bfb57081418f0b0cbd979ffc39bd90bc6114cee7ef91a8140f3b6f4663a2c" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.641887 4813 scope.go:117] "RemoveContainer" containerID="d7c74648f46a75b52a5cd313f8fa93c6dced3dd6969f0ffbc191ebc4a058c586" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.657749 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85ac76b-def9-4260-a408-eecd5c9a3760-run-httpd\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.657801 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-scripts\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.657838 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.657865 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-config-data\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.657978 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85ac76b-def9-4260-a408-eecd5c9a3760-log-httpd\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.658054 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.658086 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwtn\" (UniqueName: \"kubernetes.io/projected/f85ac76b-def9-4260-a408-eecd5c9a3760-kube-api-access-fjwtn\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.658339 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.759886 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85ac76b-def9-4260-a408-eecd5c9a3760-log-httpd\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.760354 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.760532 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjwtn\" (UniqueName: \"kubernetes.io/projected/f85ac76b-def9-4260-a408-eecd5c9a3760-kube-api-access-fjwtn\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.760713 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.760892 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85ac76b-def9-4260-a408-eecd5c9a3760-run-httpd\") pod 
\"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.761053 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-scripts\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.761204 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.761384 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-config-data\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.760741 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85ac76b-def9-4260-a408-eecd5c9a3760-log-httpd\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.762259 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85ac76b-def9-4260-a408-eecd5c9a3760-run-httpd\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.771736 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.771793 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.772458 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-scripts\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.773143 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-config-data\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.783806 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.789037 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjwtn\" (UniqueName: \"kubernetes.io/projected/f85ac76b-def9-4260-a408-eecd5c9a3760-kube-api-access-fjwtn\") pod \"ceilometer-0\" (UID: 
\"f85ac76b-def9-4260-a408-eecd5c9a3760\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:23 crc kubenswrapper[4813]: I0217 09:04:23.907428 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:24 crc kubenswrapper[4813]: W0217 09:04:24.414139 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85ac76b_def9_4260_a408_eecd5c9a3760.slice/crio-9903d3cbf502f51d9d64cd7e9c494829b32f185b9a0c77659bea937068110db0 WatchSource:0}: Error finding container 9903d3cbf502f51d9d64cd7e9c494829b32f185b9a0c77659bea937068110db0: Status 404 returned error can't find the container with id 9903d3cbf502f51d9d64cd7e9c494829b32f185b9a0c77659bea937068110db0 Feb 17 09:04:24 crc kubenswrapper[4813]: I0217 09:04:24.414164 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:04:24 crc kubenswrapper[4813]: I0217 09:04:24.505382 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f85ac76b-def9-4260-a408-eecd5c9a3760","Type":"ContainerStarted","Data":"9903d3cbf502f51d9d64cd7e9c494829b32f185b9a0c77659bea937068110db0"} Feb 17 09:04:25 crc kubenswrapper[4813]: I0217 09:04:25.126652 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea1d684-24a5-4f95-9e83-9da7b79bf5f7" path="/var/lib/kubelet/pods/0ea1d684-24a5-4f95-9e83-9da7b79bf5f7/volumes" Feb 17 09:04:25 crc kubenswrapper[4813]: I0217 09:04:25.515267 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f85ac76b-def9-4260-a408-eecd5c9a3760","Type":"ContainerStarted","Data":"4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da"} Feb 17 09:04:26 crc kubenswrapper[4813]: I0217 09:04:26.527682 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"f85ac76b-def9-4260-a408-eecd5c9a3760","Type":"ContainerStarted","Data":"0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac"} Feb 17 09:04:27 crc kubenswrapper[4813]: I0217 09:04:27.537152 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f85ac76b-def9-4260-a408-eecd5c9a3760","Type":"ContainerStarted","Data":"26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c"} Feb 17 09:04:28 crc kubenswrapper[4813]: I0217 09:04:28.548053 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f85ac76b-def9-4260-a408-eecd5c9a3760","Type":"ContainerStarted","Data":"bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7"} Feb 17 09:04:28 crc kubenswrapper[4813]: I0217 09:04:28.549139 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:04:28 crc kubenswrapper[4813]: I0217 09:04:28.567068 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.4733043 podStartE2EDuration="5.567053569s" podCreationTimestamp="2026-02-17 09:04:23 +0000 UTC" firstStartedPulling="2026-02-17 09:04:24.420810066 +0000 UTC m=+1412.081571289" lastFinishedPulling="2026-02-17 09:04:27.514559325 +0000 UTC m=+1415.175320558" observedRunningTime="2026-02-17 09:04:28.565401712 +0000 UTC m=+1416.226162935" watchObservedRunningTime="2026-02-17 09:04:28.567053569 +0000 UTC m=+1416.227814792" Feb 17 09:04:31 crc kubenswrapper[4813]: I0217 09:04:31.861955 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Feb 17 09:04:31 crc kubenswrapper[4813]: I0217 09:04:31.862577 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/memcached-0" podUID="afb01180-51b9-47b4-8f48-8ec2ce45c286" containerName="memcached" 
containerID="cri-o://c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db" gracePeriod=30 Feb 17 09:04:31 crc kubenswrapper[4813]: I0217 09:04:31.946569 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:04:31 crc kubenswrapper[4813]: I0217 09:04:31.946838 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e4c08d2c-277c-46e7-a274-56a389113175" containerName="watcher-decision-engine" containerID="cri-o://5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a" gracePeriod=30 Feb 17 09:04:31 crc kubenswrapper[4813]: I0217 09:04:31.956505 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:04:31 crc kubenswrapper[4813]: I0217 09:04:31.956774 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="fb6c5674-7a47-4f50-9836-5e13d6b6ee0e" containerName="watcher-applier" containerID="cri-o://7ae2b9a9be7d3b106d42f7d963b6f1ab955679abf36954ba1376e65e0dfc6d83" gracePeriod=30 Feb 17 09:04:31 crc kubenswrapper[4813]: I0217 09:04:31.964921 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:04:31 crc kubenswrapper[4813]: I0217 09:04:31.965183 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f51f7109-2fd1-44a0-a988-276436bd1498" containerName="watcher-kuttl-api-log" containerID="cri-o://74643e846b4183f4228ee78b16b82485187ab1bdeec381ae08fcb4088e4e9110" gracePeriod=30 Feb 17 09:04:31 crc kubenswrapper[4813]: I0217 09:04:31.965462 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f51f7109-2fd1-44a0-a988-276436bd1498" 
containerName="watcher-api" containerID="cri-o://33626573aa204ef5fda9941c244bd161695b4873c95b6c3ee96dec46cef954f1" gracePeriod=30 Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.012852 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-xh5xx"] Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.021691 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-xh5xx"] Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.043812 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w54dd"] Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.047599 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.055591 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-mtls" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.056893 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.080294 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w54dd"] Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.103817 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-config-data\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.103864 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-cert-memcached-mtls\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.103896 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-combined-ca-bundle\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.103931 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-credential-keys\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.103994 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-scripts\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.104057 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdcq5\" (UniqueName: \"kubernetes.io/projected/1cac6042-b341-4c35-8049-fe8b24d07179-kube-api-access-kdcq5\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.106019 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-fernet-keys\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.207666 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-scripts\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.207750 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdcq5\" (UniqueName: \"kubernetes.io/projected/1cac6042-b341-4c35-8049-fe8b24d07179-kube-api-access-kdcq5\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.207831 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-fernet-keys\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.207869 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-config-data\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.207898 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-cert-memcached-mtls\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.207926 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-combined-ca-bundle\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.207946 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-credential-keys\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.213021 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-credential-keys\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.213021 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-scripts\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.215130 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-fernet-keys\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.217824 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-cert-memcached-mtls\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.218456 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-config-data\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.227367 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-combined-ca-bundle\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.230404 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdcq5\" (UniqueName: \"kubernetes.io/projected/1cac6042-b341-4c35-8049-fe8b24d07179-kube-api-access-kdcq5\") pod \"keystone-bootstrap-w54dd\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.371968 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.584372 4813 generic.go:334] "Generic (PLEG): container finished" podID="f51f7109-2fd1-44a0-a988-276436bd1498" containerID="74643e846b4183f4228ee78b16b82485187ab1bdeec381ae08fcb4088e4e9110" exitCode=143 Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.584611 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f51f7109-2fd1-44a0-a988-276436bd1498","Type":"ContainerDied","Data":"74643e846b4183f4228ee78b16b82485187ab1bdeec381ae08fcb4088e4e9110"} Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.903364 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w54dd"] Feb 17 09:04:32 crc kubenswrapper[4813]: I0217 09:04:32.968558 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.019796 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/afb01180-51b9-47b4-8f48-8ec2ce45c286-memcached-tls-certs\") pod \"afb01180-51b9-47b4-8f48-8ec2ce45c286\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.019919 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb01180-51b9-47b4-8f48-8ec2ce45c286-combined-ca-bundle\") pod \"afb01180-51b9-47b4-8f48-8ec2ce45c286\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.019988 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afb01180-51b9-47b4-8f48-8ec2ce45c286-config-data\") pod 
\"afb01180-51b9-47b4-8f48-8ec2ce45c286\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.020103 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/afb01180-51b9-47b4-8f48-8ec2ce45c286-kolla-config\") pod \"afb01180-51b9-47b4-8f48-8ec2ce45c286\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.020136 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j7vz\" (UniqueName: \"kubernetes.io/projected/afb01180-51b9-47b4-8f48-8ec2ce45c286-kube-api-access-8j7vz\") pod \"afb01180-51b9-47b4-8f48-8ec2ce45c286\" (UID: \"afb01180-51b9-47b4-8f48-8ec2ce45c286\") " Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.024970 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb01180-51b9-47b4-8f48-8ec2ce45c286-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "afb01180-51b9-47b4-8f48-8ec2ce45c286" (UID: "afb01180-51b9-47b4-8f48-8ec2ce45c286"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.032625 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb01180-51b9-47b4-8f48-8ec2ce45c286-config-data" (OuterVolumeSpecName: "config-data") pod "afb01180-51b9-47b4-8f48-8ec2ce45c286" (UID: "afb01180-51b9-47b4-8f48-8ec2ce45c286"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.041601 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb01180-51b9-47b4-8f48-8ec2ce45c286-kube-api-access-8j7vz" (OuterVolumeSpecName: "kube-api-access-8j7vz") pod "afb01180-51b9-47b4-8f48-8ec2ce45c286" (UID: "afb01180-51b9-47b4-8f48-8ec2ce45c286"). InnerVolumeSpecName "kube-api-access-8j7vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.056775 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb01180-51b9-47b4-8f48-8ec2ce45c286-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afb01180-51b9-47b4-8f48-8ec2ce45c286" (UID: "afb01180-51b9-47b4-8f48-8ec2ce45c286"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.129663 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/afb01180-51b9-47b4-8f48-8ec2ce45c286-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.129977 4813 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/afb01180-51b9-47b4-8f48-8ec2ce45c286-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.129992 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j7vz\" (UniqueName: \"kubernetes.io/projected/afb01180-51b9-47b4-8f48-8ec2ce45c286-kube-api-access-8j7vz\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.130004 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/afb01180-51b9-47b4-8f48-8ec2ce45c286-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.135396 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb01180-51b9-47b4-8f48-8ec2ce45c286-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "afb01180-51b9-47b4-8f48-8ec2ce45c286" (UID: "afb01180-51b9-47b4-8f48-8ec2ce45c286"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.147917 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f51f7109-2fd1-44a0-a988-276436bd1498" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.175:9322/\": read tcp 10.217.0.2:59038->10.217.0.175:9322: read: connection reset by peer" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.147960 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f51f7109-2fd1-44a0-a988-276436bd1498" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.175:9322/\": read tcp 10.217.0.2:59044->10.217.0.175:9322: read: connection reset by peer" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.161781 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01612876-0452-4954-8ab4-c101d091a500" path="/var/lib/kubelet/pods/01612876-0452-4954-8ab4-c101d091a500/volumes" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.233525 4813 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/afb01180-51b9-47b4-8f48-8ec2ce45c286-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:33 crc kubenswrapper[4813]: E0217 09:04:33.333796 4813 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ae2b9a9be7d3b106d42f7d963b6f1ab955679abf36954ba1376e65e0dfc6d83" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:04:33 crc kubenswrapper[4813]: E0217 09:04:33.335193 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ae2b9a9be7d3b106d42f7d963b6f1ab955679abf36954ba1376e65e0dfc6d83" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:04:33 crc kubenswrapper[4813]: E0217 09:04:33.339080 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7ae2b9a9be7d3b106d42f7d963b6f1ab955679abf36954ba1376e65e0dfc6d83" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:04:33 crc kubenswrapper[4813]: E0217 09:04:33.339144 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="fb6c5674-7a47-4f50-9836-5e13d6b6ee0e" containerName="watcher-applier" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.593828 4813 generic.go:334] "Generic (PLEG): container finished" podID="afb01180-51b9-47b4-8f48-8ec2ce45c286" containerID="c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db" exitCode=0 Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.593911 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" 
event={"ID":"afb01180-51b9-47b4-8f48-8ec2ce45c286","Type":"ContainerDied","Data":"c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db"} Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.593940 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"afb01180-51b9-47b4-8f48-8ec2ce45c286","Type":"ContainerDied","Data":"7a785b158823d46aa76cde6f53a8482a706da384542919397904c612656577ad"} Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.593913 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.594003 4813 scope.go:117] "RemoveContainer" containerID="c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.595683 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-w54dd" event={"ID":"1cac6042-b341-4c35-8049-fe8b24d07179","Type":"ContainerStarted","Data":"72cf464838c665ad283dea4190e29e4562f2589d2de02902a0867d197f308370"} Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.595715 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-w54dd" event={"ID":"1cac6042-b341-4c35-8049-fe8b24d07179","Type":"ContainerStarted","Data":"5de0e48a23f605b7f0116a1ad82c489d311d9f88ec2012174db13ebd8094ba50"} Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.597593 4813 generic.go:334] "Generic (PLEG): container finished" podID="f51f7109-2fd1-44a0-a988-276436bd1498" containerID="33626573aa204ef5fda9941c244bd161695b4873c95b6c3ee96dec46cef954f1" exitCode=0 Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.597615 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"f51f7109-2fd1-44a0-a988-276436bd1498","Type":"ContainerDied","Data":"33626573aa204ef5fda9941c244bd161695b4873c95b6c3ee96dec46cef954f1"} Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.624785 4813 scope.go:117] "RemoveContainer" containerID="c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db" Feb 17 09:04:33 crc kubenswrapper[4813]: E0217 09:04:33.625368 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db\": container with ID starting with c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db not found: ID does not exist" containerID="c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.625431 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db"} err="failed to get container status \"c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db\": rpc error: code = NotFound desc = could not find container \"c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db\": container with ID starting with c82d31bb4c6e5a41c563d24bbb81ef69b0e2ba9a0bd550f036c93b34422093db not found: ID does not exist" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.627954 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-w54dd" podStartSLOduration=1.627932438 podStartE2EDuration="1.627932438s" podCreationTimestamp="2026-02-17 09:04:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:04:33.620112164 +0000 UTC m=+1421.280873387" watchObservedRunningTime="2026-02-17 09:04:33.627932438 +0000 UTC m=+1421.288693661" Feb 17 09:04:33 crc 
kubenswrapper[4813]: I0217 09:04:33.640854 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.650755 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/memcached-0"] Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.666577 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Feb 17 09:04:33 crc kubenswrapper[4813]: E0217 09:04:33.666970 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb01180-51b9-47b4-8f48-8ec2ce45c286" containerName="memcached" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.666998 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb01180-51b9-47b4-8f48-8ec2ce45c286" containerName="memcached" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.667206 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb01180-51b9-47b4-8f48-8ec2ce45c286" containerName="memcached" Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.667869 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.670095 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.670557 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-vcvwm"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.671119 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.692935 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.743602 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f47de837-e205-46ce-8397-77b54ea93653-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.743658 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47de837-e205-46ce-8397-77b54ea93653-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.743701 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdk8\" (UniqueName: \"kubernetes.io/projected/f47de837-e205-46ce-8397-77b54ea93653-kube-api-access-vwdk8\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.743732 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f47de837-e205-46ce-8397-77b54ea93653-config-data\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.743777 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f47de837-e205-46ce-8397-77b54ea93653-kolla-config\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.844703 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f47de837-e205-46ce-8397-77b54ea93653-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.844748 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47de837-e205-46ce-8397-77b54ea93653-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.844770 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdk8\" (UniqueName: \"kubernetes.io/projected/f47de837-e205-46ce-8397-77b54ea93653-kube-api-access-vwdk8\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.844798 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f47de837-e205-46ce-8397-77b54ea93653-config-data\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.844823 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f47de837-e205-46ce-8397-77b54ea93653-kolla-config\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.845539 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f47de837-e205-46ce-8397-77b54ea93653-kolla-config\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.845638 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f47de837-e205-46ce-8397-77b54ea93653-config-data\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.850253 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f47de837-e205-46ce-8397-77b54ea93653-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.860151 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f47de837-e205-46ce-8397-77b54ea93653-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.866180 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwdk8\" (UniqueName: \"kubernetes.io/projected/f47de837-e205-46ce-8397-77b54ea93653-kube-api-access-vwdk8\") pod \"memcached-0\" (UID: \"f47de837-e205-46ce-8397-77b54ea93653\") " pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.983128 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:33 crc kubenswrapper[4813]: I0217 09:04:33.987067 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.055388 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-config-data\") pod \"f51f7109-2fd1-44a0-a988-276436bd1498\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") "
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.055498 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-combined-ca-bundle\") pod \"f51f7109-2fd1-44a0-a988-276436bd1498\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") "
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.055543 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-custom-prometheus-ca\") pod \"f51f7109-2fd1-44a0-a988-276436bd1498\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") "
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.055562 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f51f7109-2fd1-44a0-a988-276436bd1498-logs\") pod \"f51f7109-2fd1-44a0-a988-276436bd1498\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") "
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.055588 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-internal-tls-certs\") pod \"f51f7109-2fd1-44a0-a988-276436bd1498\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") "
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.055671 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jclvg\" (UniqueName: \"kubernetes.io/projected/f51f7109-2fd1-44a0-a988-276436bd1498-kube-api-access-jclvg\") pod \"f51f7109-2fd1-44a0-a988-276436bd1498\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") "
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.055703 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-public-tls-certs\") pod \"f51f7109-2fd1-44a0-a988-276436bd1498\" (UID: \"f51f7109-2fd1-44a0-a988-276436bd1498\") "
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.056936 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51f7109-2fd1-44a0-a988-276436bd1498-logs" (OuterVolumeSpecName: "logs") pod "f51f7109-2fd1-44a0-a988-276436bd1498" (UID: "f51f7109-2fd1-44a0-a988-276436bd1498"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.064174 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51f7109-2fd1-44a0-a988-276436bd1498-kube-api-access-jclvg" (OuterVolumeSpecName: "kube-api-access-jclvg") pod "f51f7109-2fd1-44a0-a988-276436bd1498" (UID: "f51f7109-2fd1-44a0-a988-276436bd1498"). InnerVolumeSpecName "kube-api-access-jclvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.078461 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f51f7109-2fd1-44a0-a988-276436bd1498" (UID: "f51f7109-2fd1-44a0-a988-276436bd1498"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.098422 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f51f7109-2fd1-44a0-a988-276436bd1498" (UID: "f51f7109-2fd1-44a0-a988-276436bd1498"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.120664 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f51f7109-2fd1-44a0-a988-276436bd1498" (UID: "f51f7109-2fd1-44a0-a988-276436bd1498"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.125975 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f51f7109-2fd1-44a0-a988-276436bd1498" (UID: "f51f7109-2fd1-44a0-a988-276436bd1498"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.126014 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-config-data" (OuterVolumeSpecName: "config-data") pod "f51f7109-2fd1-44a0-a988-276436bd1498" (UID: "f51f7109-2fd1-44a0-a988-276436bd1498"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.157833 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.158145 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.158157 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.158166 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f51f7109-2fd1-44a0-a988-276436bd1498-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.158174 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.158182 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jclvg\" (UniqueName: \"kubernetes.io/projected/f51f7109-2fd1-44a0-a988-276436bd1498-kube-api-access-jclvg\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.158191 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f7109-2fd1-44a0-a988-276436bd1498-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.459733 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Feb 17 09:04:34 crc kubenswrapper[4813]: W0217 09:04:34.472808 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf47de837_e205_46ce_8397_77b54ea93653.slice/crio-4c857cd4e43ac2ba25335349228654d4e9bbf255e9792592b1aac73e52cbe7d7 WatchSource:0}: Error finding container 4c857cd4e43ac2ba25335349228654d4e9bbf255e9792592b1aac73e52cbe7d7: Status 404 returned error can't find the container with id 4c857cd4e43ac2ba25335349228654d4e9bbf255e9792592b1aac73e52cbe7d7
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.608454 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.608446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f51f7109-2fd1-44a0-a988-276436bd1498","Type":"ContainerDied","Data":"5a13340e3507fa130333abdb7dff9ea69b4c485da777e9c0730f6cfab9d61352"}
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.608604 4813 scope.go:117] "RemoveContainer" containerID="33626573aa204ef5fda9941c244bd161695b4873c95b6c3ee96dec46cef954f1"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.611907 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"f47de837-e205-46ce-8397-77b54ea93653","Type":"ContainerStarted","Data":"4c857cd4e43ac2ba25335349228654d4e9bbf255e9792592b1aac73e52cbe7d7"}
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.629390 4813 scope.go:117] "RemoveContainer" containerID="74643e846b4183f4228ee78b16b82485187ab1bdeec381ae08fcb4088e4e9110"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.652363 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.687943 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.705378 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:04:34 crc kubenswrapper[4813]: E0217 09:04:34.706065 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51f7109-2fd1-44a0-a988-276436bd1498" containerName="watcher-api"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.706087 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51f7109-2fd1-44a0-a988-276436bd1498" containerName="watcher-api"
Feb 17 09:04:34 crc kubenswrapper[4813]: E0217 09:04:34.706133 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51f7109-2fd1-44a0-a988-276436bd1498" containerName="watcher-kuttl-api-log"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.706140 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51f7109-2fd1-44a0-a988-276436bd1498" containerName="watcher-kuttl-api-log"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.706404 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51f7109-2fd1-44a0-a988-276436bd1498" containerName="watcher-api"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.706422 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51f7109-2fd1-44a0-a988-276436bd1498" containerName="watcher-kuttl-api-log"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.707648 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.710774 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.711081 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.711215 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.716568 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.768148 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.768207 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09ec704-8c28-487f-b90b-1a7d65252fb3-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.768292 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.768352 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.768432 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6qhq\" (UniqueName: \"kubernetes.io/projected/c09ec704-8c28-487f-b90b-1a7d65252fb3-kube-api-access-f6qhq\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.768475 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.768496 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.768521 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.869171 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.869584 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.869615 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6qhq\" (UniqueName: \"kubernetes.io/projected/c09ec704-8c28-487f-b90b-1a7d65252fb3-kube-api-access-f6qhq\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.869646 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.869666 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.869685 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.869719 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.869738 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09ec704-8c28-487f-b90b-1a7d65252fb3-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.870111 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09ec704-8c28-487f-b90b-1a7d65252fb3-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.874585 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.874639 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.875267 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.875340 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.876263 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.876980 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:34 crc kubenswrapper[4813]: I0217 09:04:34.893752 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6qhq\" (UniqueName: \"kubernetes.io/projected/c09ec704-8c28-487f-b90b-1a7d65252fb3-kube-api-access-f6qhq\") pod \"watcher-kuttl-api-0\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.028695 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.124795 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb01180-51b9-47b4-8f48-8ec2ce45c286" path="/var/lib/kubelet/pods/afb01180-51b9-47b4-8f48-8ec2ce45c286/volumes"
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.125641 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51f7109-2fd1-44a0-a988-276436bd1498" path="/var/lib/kubelet/pods/f51f7109-2fd1-44a0-a988-276436bd1498/volumes"
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.564220 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.623783 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c09ec704-8c28-487f-b90b-1a7d65252fb3","Type":"ContainerStarted","Data":"af1f12952fb10d85a433ada56344a729cbf92add173357f241127b24032edb50"}
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.640577 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"f47de837-e205-46ce-8397-77b54ea93653","Type":"ContainerStarted","Data":"5436d239bf03384f30f02d9b797c0feaf1e6c0a30415c8941258bf140bac8e5a"}
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.641172 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0"
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.645412 4813 generic.go:334] "Generic (PLEG): container finished" podID="fb6c5674-7a47-4f50-9836-5e13d6b6ee0e" containerID="7ae2b9a9be7d3b106d42f7d963b6f1ab955679abf36954ba1376e65e0dfc6d83" exitCode=0
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.645447 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e","Type":"ContainerDied","Data":"7ae2b9a9be7d3b106d42f7d963b6f1ab955679abf36954ba1376e65e0dfc6d83"}
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.666027 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.666005863 podStartE2EDuration="2.666005863s" podCreationTimestamp="2026-02-17 09:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:04:35.655822042 +0000 UTC m=+1423.316583295" watchObservedRunningTime="2026-02-17 09:04:35.666005863 +0000 UTC m=+1423.326767086"
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.735869 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.783132 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z624\" (UniqueName: \"kubernetes.io/projected/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-kube-api-access-2z624\") pod \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") "
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.783253 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-config-data\") pod \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") "
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.783344 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-logs\") pod \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") "
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.783983 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-combined-ca-bundle\") pod \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\" (UID: \"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e\") "
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.785360 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-logs" (OuterVolumeSpecName: "logs") pod "fb6c5674-7a47-4f50-9836-5e13d6b6ee0e" (UID: "fb6c5674-7a47-4f50-9836-5e13d6b6ee0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.788543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-kube-api-access-2z624" (OuterVolumeSpecName: "kube-api-access-2z624") pod "fb6c5674-7a47-4f50-9836-5e13d6b6ee0e" (UID: "fb6c5674-7a47-4f50-9836-5e13d6b6ee0e"). InnerVolumeSpecName "kube-api-access-2z624". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.832191 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb6c5674-7a47-4f50-9836-5e13d6b6ee0e" (UID: "fb6c5674-7a47-4f50-9836-5e13d6b6ee0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.852914 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-config-data" (OuterVolumeSpecName: "config-data") pod "fb6c5674-7a47-4f50-9836-5e13d6b6ee0e" (UID: "fb6c5674-7a47-4f50-9836-5e13d6b6ee0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.885689 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.885720 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.885732 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:35 crc kubenswrapper[4813]: I0217 09:04:35.885745 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z624\" (UniqueName: \"kubernetes.io/projected/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e-kube-api-access-2z624\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.655777 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"fb6c5674-7a47-4f50-9836-5e13d6b6ee0e","Type":"ContainerDied","Data":"c311210bf9bfae4acf6a41962e036d9b5612e2ecd5d314c451fd7549efaaa07c"}
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.655819 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.656200 4813 scope.go:117] "RemoveContainer" containerID="7ae2b9a9be7d3b106d42f7d963b6f1ab955679abf36954ba1376e65e0dfc6d83"
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.658402 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c09ec704-8c28-487f-b90b-1a7d65252fb3","Type":"ContainerStarted","Data":"237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f"}
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.658443 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c09ec704-8c28-487f-b90b-1a7d65252fb3","Type":"ContainerStarted","Data":"2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798"}
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.658467 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.663903 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cac6042-b341-4c35-8049-fe8b24d07179" containerID="72cf464838c665ad283dea4190e29e4562f2589d2de02902a0867d197f308370" exitCode=0
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.665220 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-w54dd" event={"ID":"1cac6042-b341-4c35-8049-fe8b24d07179","Type":"ContainerDied","Data":"72cf464838c665ad283dea4190e29e4562f2589d2de02902a0867d197f308370"}
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.701980 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.701955496 podStartE2EDuration="2.701955496s" podCreationTimestamp="2026-02-17 09:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:04:36.68604491 +0000 UTC m=+1424.346806173" watchObservedRunningTime="2026-02-17 09:04:36.701955496 +0000 UTC m=+1424.362716729"
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.732977 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.752253 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.773628 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:04:36 crc kubenswrapper[4813]: E0217 09:04:36.774036 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6c5674-7a47-4f50-9836-5e13d6b6ee0e" containerName="watcher-applier"
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.774055 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6c5674-7a47-4f50-9836-5e13d6b6ee0e" containerName="watcher-applier"
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.774277 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb6c5674-7a47-4f50-9836-5e13d6b6ee0e" containerName="watcher-applier"
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.774962 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.785566 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.786965 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.810035 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.810227 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cab1458-f302-4222-8023-e01ed086c0d1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.810292 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8zh\" (UniqueName: \"kubernetes.io/projected/2cab1458-f302-4222-8023-e01ed086c0d1-kube-api-access-8w8zh\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.810419 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID:
\"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.810500 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:36 crc kubenswrapper[4813]: E0217 09:04:36.817991 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb6c5674_7a47_4f50_9836_5e13d6b6ee0e.slice\": RecentStats: unable to find data in memory cache]" Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.913343 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.913430 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.913507 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.913576 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cab1458-f302-4222-8023-e01ed086c0d1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.913604 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w8zh\" (UniqueName: \"kubernetes.io/projected/2cab1458-f302-4222-8023-e01ed086c0d1-kube-api-access-8w8zh\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.915845 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cab1458-f302-4222-8023-e01ed086c0d1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.919167 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.919550 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:36 crc 
kubenswrapper[4813]: I0217 09:04:36.933342 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:36 crc kubenswrapper[4813]: I0217 09:04:36.940501 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w8zh\" (UniqueName: \"kubernetes.io/projected/2cab1458-f302-4222-8023-e01ed086c0d1-kube-api-access-8w8zh\") pod \"watcher-kuttl-applier-0\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:37 crc kubenswrapper[4813]: I0217 09:04:37.119959 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:37 crc kubenswrapper[4813]: I0217 09:04:37.131278 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb6c5674-7a47-4f50-9836-5e13d6b6ee0e" path="/var/lib/kubelet/pods/fb6c5674-7a47-4f50-9836-5e13d6b6ee0e/volumes" Feb 17 09:04:37 crc kubenswrapper[4813]: W0217 09:04:37.661424 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cab1458_f302_4222_8023_e01ed086c0d1.slice/crio-c8fb25dd5de8997dad7fe973c364b912e0c234126a542667e64b048884bdab86 WatchSource:0}: Error finding container c8fb25dd5de8997dad7fe973c364b912e0c234126a542667e64b048884bdab86: Status 404 returned error can't find the container with id c8fb25dd5de8997dad7fe973c364b912e0c234126a542667e64b048884bdab86 Feb 17 09:04:37 crc kubenswrapper[4813]: I0217 09:04:37.673448 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:04:37 crc kubenswrapper[4813]: I0217 09:04:37.677859 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2cab1458-f302-4222-8023-e01ed086c0d1","Type":"ContainerStarted","Data":"c8fb25dd5de8997dad7fe973c364b912e0c234126a542667e64b048884bdab86"} Feb 17 09:04:37 crc kubenswrapper[4813]: I0217 09:04:37.993195 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.040037 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-config-data\") pod \"1cac6042-b341-4c35-8049-fe8b24d07179\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.040114 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdcq5\" (UniqueName: \"kubernetes.io/projected/1cac6042-b341-4c35-8049-fe8b24d07179-kube-api-access-kdcq5\") pod \"1cac6042-b341-4c35-8049-fe8b24d07179\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.040151 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-combined-ca-bundle\") pod \"1cac6042-b341-4c35-8049-fe8b24d07179\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.040216 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-cert-memcached-mtls\") pod \"1cac6042-b341-4c35-8049-fe8b24d07179\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.040262 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-credential-keys\") pod \"1cac6042-b341-4c35-8049-fe8b24d07179\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.040478 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-scripts\") pod \"1cac6042-b341-4c35-8049-fe8b24d07179\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.040511 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-fernet-keys\") pod \"1cac6042-b341-4c35-8049-fe8b24d07179\" (UID: \"1cac6042-b341-4c35-8049-fe8b24d07179\") " Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.046686 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1cac6042-b341-4c35-8049-fe8b24d07179" (UID: "1cac6042-b341-4c35-8049-fe8b24d07179"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.049808 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cac6042-b341-4c35-8049-fe8b24d07179-kube-api-access-kdcq5" (OuterVolumeSpecName: "kube-api-access-kdcq5") pod "1cac6042-b341-4c35-8049-fe8b24d07179" (UID: "1cac6042-b341-4c35-8049-fe8b24d07179"). InnerVolumeSpecName "kube-api-access-kdcq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.053779 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1cac6042-b341-4c35-8049-fe8b24d07179" (UID: "1cac6042-b341-4c35-8049-fe8b24d07179"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.056497 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-scripts" (OuterVolumeSpecName: "scripts") pod "1cac6042-b341-4c35-8049-fe8b24d07179" (UID: "1cac6042-b341-4c35-8049-fe8b24d07179"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.082643 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cac6042-b341-4c35-8049-fe8b24d07179" (UID: "1cac6042-b341-4c35-8049-fe8b24d07179"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.085734 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-config-data" (OuterVolumeSpecName: "config-data") pod "1cac6042-b341-4c35-8049-fe8b24d07179" (UID: "1cac6042-b341-4c35-8049-fe8b24d07179"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.119533 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "1cac6042-b341-4c35-8049-fe8b24d07179" (UID: "1cac6042-b341-4c35-8049-fe8b24d07179"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.142613 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.142652 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.142668 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.142682 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdcq5\" (UniqueName: \"kubernetes.io/projected/1cac6042-b341-4c35-8049-fe8b24d07179-kube-api-access-kdcq5\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.142695 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.142706 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.142718 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1cac6042-b341-4c35-8049-fe8b24d07179-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:38 crc kubenswrapper[4813]: E0217 09:04:38.154576 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 17 09:04:38 crc kubenswrapper[4813]: E0217 09:04:38.156326 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 17 09:04:38 crc kubenswrapper[4813]: E0217 09:04:38.160955 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 17 09:04:38 crc kubenswrapper[4813]: E0217 09:04:38.161028 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e4c08d2c-277c-46e7-a274-56a389113175" 
containerName="watcher-decision-engine" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.693775 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-w54dd" event={"ID":"1cac6042-b341-4c35-8049-fe8b24d07179","Type":"ContainerDied","Data":"5de0e48a23f605b7f0116a1ad82c489d311d9f88ec2012174db13ebd8094ba50"} Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.693832 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5de0e48a23f605b7f0116a1ad82c489d311d9f88ec2012174db13ebd8094ba50" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.693910 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-w54dd" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.710630 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2cab1458-f302-4222-8023-e01ed086c0d1","Type":"ContainerStarted","Data":"e96bf8bcb33f896323a001be7234bbc18e84e3f96ed50a67dde9ad383fad5b4a"} Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.748593 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.748557915 podStartE2EDuration="2.748557915s" podCreationTimestamp="2026-02-17 09:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:04:38.736197021 +0000 UTC m=+1426.396958244" watchObservedRunningTime="2026-02-17 09:04:38.748557915 +0000 UTC m=+1426.409319138" Feb 17 09:04:38 crc kubenswrapper[4813]: I0217 09:04:38.891827 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:40 crc kubenswrapper[4813]: I0217 09:04:40.029540 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.365153 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.409745 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-combined-ca-bundle\") pod \"e4c08d2c-277c-46e7-a274-56a389113175\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.409791 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ntl2\" (UniqueName: \"kubernetes.io/projected/e4c08d2c-277c-46e7-a274-56a389113175-kube-api-access-4ntl2\") pod \"e4c08d2c-277c-46e7-a274-56a389113175\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.409870 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-config-data\") pod \"e4c08d2c-277c-46e7-a274-56a389113175\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.409964 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4c08d2c-277c-46e7-a274-56a389113175-logs\") pod \"e4c08d2c-277c-46e7-a274-56a389113175\" (UID: \"e4c08d2c-277c-46e7-a274-56a389113175\") " Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.409982 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-custom-prometheus-ca\") pod \"e4c08d2c-277c-46e7-a274-56a389113175\" (UID: 
\"e4c08d2c-277c-46e7-a274-56a389113175\") " Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.410279 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4c08d2c-277c-46e7-a274-56a389113175-logs" (OuterVolumeSpecName: "logs") pod "e4c08d2c-277c-46e7-a274-56a389113175" (UID: "e4c08d2c-277c-46e7-a274-56a389113175"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.430511 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c08d2c-277c-46e7-a274-56a389113175-kube-api-access-4ntl2" (OuterVolumeSpecName: "kube-api-access-4ntl2") pod "e4c08d2c-277c-46e7-a274-56a389113175" (UID: "e4c08d2c-277c-46e7-a274-56a389113175"). InnerVolumeSpecName "kube-api-access-4ntl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.467468 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e4c08d2c-277c-46e7-a274-56a389113175" (UID: "e4c08d2c-277c-46e7-a274-56a389113175"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.469815 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4c08d2c-277c-46e7-a274-56a389113175" (UID: "e4c08d2c-277c-46e7-a274-56a389113175"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.484979 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-config-data" (OuterVolumeSpecName: "config-data") pod "e4c08d2c-277c-46e7-a274-56a389113175" (UID: "e4c08d2c-277c-46e7-a274-56a389113175"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.512096 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.512136 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ntl2\" (UniqueName: \"kubernetes.io/projected/e4c08d2c-277c-46e7-a274-56a389113175-kube-api-access-4ntl2\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.512147 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.512156 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4c08d2c-277c-46e7-a274-56a389113175-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.512164 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e4c08d2c-277c-46e7-a274-56a389113175-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.734994 4813 generic.go:334] "Generic (PLEG): container finished" podID="e4c08d2c-277c-46e7-a274-56a389113175" 
containerID="5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a" exitCode=0 Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.735059 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.735068 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e4c08d2c-277c-46e7-a274-56a389113175","Type":"ContainerDied","Data":"5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a"} Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.735476 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e4c08d2c-277c-46e7-a274-56a389113175","Type":"ContainerDied","Data":"88a1a8114a0602b503b1333ff366205a66a518ac54056de6ed8d96fe20d371a8"} Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.735497 4813 scope.go:117] "RemoveContainer" containerID="5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.765394 4813 scope.go:117] "RemoveContainer" containerID="5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a" Feb 17 09:04:41 crc kubenswrapper[4813]: E0217 09:04:41.765980 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a\": container with ID starting with 5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a not found: ID does not exist" containerID="5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.766008 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a"} err="failed to get container status \"5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a\": rpc error: code = NotFound desc = could not find container \"5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a\": container with ID starting with 5e84c8eae71bc17e0c67a39a6787b948175bd60ad3aeba79408066389fb0809a not found: ID does not exist" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.779344 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.794460 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.803423 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:04:41 crc kubenswrapper[4813]: E0217 09:04:41.803942 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c08d2c-277c-46e7-a274-56a389113175" containerName="watcher-decision-engine" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.803968 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c08d2c-277c-46e7-a274-56a389113175" containerName="watcher-decision-engine" Feb 17 09:04:41 crc kubenswrapper[4813]: E0217 09:04:41.803994 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cac6042-b341-4c35-8049-fe8b24d07179" containerName="keystone-bootstrap" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.804003 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cac6042-b341-4c35-8049-fe8b24d07179" containerName="keystone-bootstrap" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.804415 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cac6042-b341-4c35-8049-fe8b24d07179" 
containerName="keystone-bootstrap" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.804445 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c08d2c-277c-46e7-a274-56a389113175" containerName="watcher-decision-engine" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.805117 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.807588 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.810711 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.919236 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.919433 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.919526 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" 
(UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.919596 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b06587ba-6195-4568-a151-63d8b78e2d68-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.919696 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6h9x\" (UniqueName: \"kubernetes.io/projected/b06587ba-6195-4568-a151-63d8b78e2d68-kube-api-access-x6h9x\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:41 crc kubenswrapper[4813]: I0217 09:04:41.919790 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.021212 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6h9x\" (UniqueName: \"kubernetes.io/projected/b06587ba-6195-4568-a151-63d8b78e2d68-kube-api-access-x6h9x\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.021661 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.021845 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.021946 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.022001 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.022048 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b06587ba-6195-4568-a151-63d8b78e2d68-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.022801 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/b06587ba-6195-4568-a151-63d8b78e2d68-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.026034 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.035186 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.035335 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.040985 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.046787 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x6h9x\" (UniqueName: \"kubernetes.io/projected/b06587ba-6195-4568-a151-63d8b78e2d68-kube-api-access-x6h9x\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.121463 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.172750 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.655228 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:04:42 crc kubenswrapper[4813]: I0217 09:04:42.750232 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b06587ba-6195-4568-a151-63d8b78e2d68","Type":"ContainerStarted","Data":"afefa61171a454dc5a2ffee31ab07228773aea9d36fea2a8876b1b7af419b030"} Feb 17 09:04:43 crc kubenswrapper[4813]: I0217 09:04:43.121519 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c08d2c-277c-46e7-a274-56a389113175" path="/var/lib/kubelet/pods/e4c08d2c-277c-46e7-a274-56a389113175/volumes" Feb 17 09:04:43 crc kubenswrapper[4813]: I0217 09:04:43.763774 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b06587ba-6195-4568-a151-63d8b78e2d68","Type":"ContainerStarted","Data":"cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec"} Feb 17 09:04:43 crc kubenswrapper[4813]: I0217 09:04:43.796514 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.796498263 
podStartE2EDuration="2.796498263s" podCreationTimestamp="2026-02-17 09:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:04:43.786353972 +0000 UTC m=+1431.447115205" watchObservedRunningTime="2026-02-17 09:04:43.796498263 +0000 UTC m=+1431.457259486" Feb 17 09:04:43 crc kubenswrapper[4813]: I0217 09:04:43.989646 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.153982 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-ffc69f97c-tr2zk"] Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.155385 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.179703 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-ffc69f97c-tr2zk"] Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.277724 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn64x\" (UniqueName: \"kubernetes.io/projected/f6f5f623-f902-4898-8ad7-15ae7d031197-kube-api-access-cn64x\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.277857 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-combined-ca-bundle\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.277923 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-cert-memcached-mtls\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.277964 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-public-tls-certs\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.278208 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-fernet-keys\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.278459 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-internal-tls-certs\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.278558 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-config-data\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 
09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.278674 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-credential-keys\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.278720 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-scripts\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.380241 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-fernet-keys\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.380599 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-internal-tls-certs\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.380681 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-config-data\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc 
kubenswrapper[4813]: I0217 09:04:44.380830 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-credential-keys\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.380980 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-scripts\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.381021 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn64x\" (UniqueName: \"kubernetes.io/projected/f6f5f623-f902-4898-8ad7-15ae7d031197-kube-api-access-cn64x\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.381255 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-combined-ca-bundle\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.381472 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-cert-memcached-mtls\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 
09:04:44.381609 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-public-tls-certs\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.394990 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-credential-keys\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.395089 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-fernet-keys\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.395142 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-config-data\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.395175 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-internal-tls-certs\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.395364 4813 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-scripts\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.395752 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-combined-ca-bundle\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.395859 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-cert-memcached-mtls\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.397107 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f5f623-f902-4898-8ad7-15ae7d031197-public-tls-certs\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.409446 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn64x\" (UniqueName: \"kubernetes.io/projected/f6f5f623-f902-4898-8ad7-15ae7d031197-kube-api-access-cn64x\") pod \"keystone-ffc69f97c-tr2zk\" (UID: \"f6f5f623-f902-4898-8ad7-15ae7d031197\") " pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.509973 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:44 crc kubenswrapper[4813]: I0217 09:04:44.954053 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-ffc69f97c-tr2zk"] Feb 17 09:04:44 crc kubenswrapper[4813]: W0217 09:04:44.958944 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6f5f623_f902_4898_8ad7_15ae7d031197.slice/crio-be41696e6d4241c383472b1eab30df849f192b183e3e9ce138faacf0fff555b8 WatchSource:0}: Error finding container be41696e6d4241c383472b1eab30df849f192b183e3e9ce138faacf0fff555b8: Status 404 returned error can't find the container with id be41696e6d4241c383472b1eab30df849f192b183e3e9ce138faacf0fff555b8 Feb 17 09:04:45 crc kubenswrapper[4813]: I0217 09:04:45.030774 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:45 crc kubenswrapper[4813]: I0217 09:04:45.058369 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:45 crc kubenswrapper[4813]: I0217 09:04:45.784665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" event={"ID":"f6f5f623-f902-4898-8ad7-15ae7d031197","Type":"ContainerStarted","Data":"fabca9a923caf00230c8e99267b45cb28812043c2d0652466e6b3b7da3d9b980"} Feb 17 09:04:45 crc kubenswrapper[4813]: I0217 09:04:45.784724 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" event={"ID":"f6f5f623-f902-4898-8ad7-15ae7d031197","Type":"ContainerStarted","Data":"be41696e6d4241c383472b1eab30df849f192b183e3e9ce138faacf0fff555b8"} Feb 17 09:04:45 crc kubenswrapper[4813]: I0217 09:04:45.791096 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 
17 09:04:45 crc kubenswrapper[4813]: I0217 09:04:45.813980 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" podStartSLOduration=1.8139527979999999 podStartE2EDuration="1.813952798s" podCreationTimestamp="2026-02-17 09:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:04:45.806952198 +0000 UTC m=+1433.467713461" watchObservedRunningTime="2026-02-17 09:04:45.813952798 +0000 UTC m=+1433.474714021" Feb 17 09:04:46 crc kubenswrapper[4813]: E0217 09:04:46.375542 4813 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.113:40590->38.102.83.113:33079: write tcp 38.102.83.113:40590->38.102.83.113:33079: write: broken pipe Feb 17 09:04:46 crc kubenswrapper[4813]: I0217 09:04:46.796264 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk" Feb 17 09:04:47 crc kubenswrapper[4813]: I0217 09:04:47.122116 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:47 crc kubenswrapper[4813]: I0217 09:04:47.145917 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:47 crc kubenswrapper[4813]: I0217 09:04:47.754981 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:04:47 crc kubenswrapper[4813]: I0217 09:04:47.804123 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c09ec704-8c28-487f-b90b-1a7d65252fb3" containerName="watcher-kuttl-api-log" containerID="cri-o://2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798" gracePeriod=30 Feb 17 09:04:47 crc kubenswrapper[4813]: I0217 
09:04:47.804151 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c09ec704-8c28-487f-b90b-1a7d65252fb3" containerName="watcher-api" containerID="cri-o://237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f" gracePeriod=30 Feb 17 09:04:47 crc kubenswrapper[4813]: I0217 09:04:47.832414 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.697885 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.763911 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-public-tls-certs\") pod \"c09ec704-8c28-487f-b90b-1a7d65252fb3\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.763975 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-internal-tls-certs\") pod \"c09ec704-8c28-487f-b90b-1a7d65252fb3\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.764024 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09ec704-8c28-487f-b90b-1a7d65252fb3-logs\") pod \"c09ec704-8c28-487f-b90b-1a7d65252fb3\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.764048 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-combined-ca-bundle\") pod \"c09ec704-8c28-487f-b90b-1a7d65252fb3\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.764091 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-config-data\") pod \"c09ec704-8c28-487f-b90b-1a7d65252fb3\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.764142 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-cert-memcached-mtls\") pod \"c09ec704-8c28-487f-b90b-1a7d65252fb3\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.764169 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-custom-prometheus-ca\") pod \"c09ec704-8c28-487f-b90b-1a7d65252fb3\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.764205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6qhq\" (UniqueName: \"kubernetes.io/projected/c09ec704-8c28-487f-b90b-1a7d65252fb3-kube-api-access-f6qhq\") pod \"c09ec704-8c28-487f-b90b-1a7d65252fb3\" (UID: \"c09ec704-8c28-487f-b90b-1a7d65252fb3\") " Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.766047 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c09ec704-8c28-487f-b90b-1a7d65252fb3-logs" (OuterVolumeSpecName: "logs") pod "c09ec704-8c28-487f-b90b-1a7d65252fb3" (UID: "c09ec704-8c28-487f-b90b-1a7d65252fb3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.771545 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09ec704-8c28-487f-b90b-1a7d65252fb3-kube-api-access-f6qhq" (OuterVolumeSpecName: "kube-api-access-f6qhq") pod "c09ec704-8c28-487f-b90b-1a7d65252fb3" (UID: "c09ec704-8c28-487f-b90b-1a7d65252fb3"). InnerVolumeSpecName "kube-api-access-f6qhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.795386 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c09ec704-8c28-487f-b90b-1a7d65252fb3" (UID: "c09ec704-8c28-487f-b90b-1a7d65252fb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.795468 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c09ec704-8c28-487f-b90b-1a7d65252fb3" (UID: "c09ec704-8c28-487f-b90b-1a7d65252fb3"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.817242 4813 generic.go:334] "Generic (PLEG): container finished" podID="c09ec704-8c28-487f-b90b-1a7d65252fb3" containerID="237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f" exitCode=0 Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.817268 4813 generic.go:334] "Generic (PLEG): container finished" podID="c09ec704-8c28-487f-b90b-1a7d65252fb3" containerID="2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798" exitCode=143 Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.818084 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.818536 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c09ec704-8c28-487f-b90b-1a7d65252fb3","Type":"ContainerDied","Data":"237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f"} Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.818561 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c09ec704-8c28-487f-b90b-1a7d65252fb3","Type":"ContainerDied","Data":"2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798"} Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.818572 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c09ec704-8c28-487f-b90b-1a7d65252fb3","Type":"ContainerDied","Data":"af1f12952fb10d85a433ada56344a729cbf92add173357f241127b24032edb50"} Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.818586 4813 scope.go:117] "RemoveContainer" containerID="237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f" Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.828462 4813 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c09ec704-8c28-487f-b90b-1a7d65252fb3" (UID: "c09ec704-8c28-487f-b90b-1a7d65252fb3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.849491 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-config-data" (OuterVolumeSpecName: "config-data") pod "c09ec704-8c28-487f-b90b-1a7d65252fb3" (UID: "c09ec704-8c28-487f-b90b-1a7d65252fb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.858638 4813 scope.go:117] "RemoveContainer" containerID="2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798" Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.865217 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c09ec704-8c28-487f-b90b-1a7d65252fb3" (UID: "c09ec704-8c28-487f-b90b-1a7d65252fb3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.866356 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6qhq\" (UniqueName: \"kubernetes.io/projected/c09ec704-8c28-487f-b90b-1a7d65252fb3-kube-api-access-f6qhq\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.866374 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.866387 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.866398 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09ec704-8c28-487f-b90b-1a7d65252fb3-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.866409 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.866420 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.866431 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.868270
4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "c09ec704-8c28-487f-b90b-1a7d65252fb3" (UID: "c09ec704-8c28-487f-b90b-1a7d65252fb3"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.884341 4813 scope.go:117] "RemoveContainer" containerID="237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f"
Feb 17 09:04:49 crc kubenswrapper[4813]: E0217 09:04:48.886833 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f\": container with ID starting with 237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f not found: ID does not exist" containerID="237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.886914 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f"} err="failed to get container status \"237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f\": rpc error: code = NotFound desc = could not find container \"237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f\": container with ID starting with 237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f not found: ID does not exist"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.886950 4813 scope.go:117] "RemoveContainer" containerID="2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798"
Feb 17 09:04:49 crc kubenswrapper[4813]: E0217 09:04:48.887389 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container
\"2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798\": container with ID starting with 2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798 not found: ID does not exist" containerID="2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.887461 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798"} err="failed to get container status \"2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798\": rpc error: code = NotFound desc = could not find container \"2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798\": container with ID starting with 2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798 not found: ID does not exist"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.887497 4813 scope.go:117] "RemoveContainer" containerID="237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.887762 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f"} err="failed to get container status \"237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f\": rpc error: code = NotFound desc = could not find container \"237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f\": container with ID starting with 237d11b5aef0b14bc1c70476fca40383571955d63519909bef827eaddc22f90f not found: ID does not exist"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.887784 4813 scope.go:117] "RemoveContainer" containerID="2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.888064 4813 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798"} err="failed to get container status \"2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798\": rpc error: code = NotFound desc = could not find container \"2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798\": container with ID starting with 2d460d3f53db9355b8cbf387997e7abaf35f5a0900ed1ab2d81169236f18d798 not found: ID does not exist"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:48.967756 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c09ec704-8c28-487f-b90b-1a7d65252fb3-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.152433 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.159485 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.185562 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:04:49 crc kubenswrapper[4813]: E0217 09:04:49.186089 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09ec704-8c28-487f-b90b-1a7d65252fb3" containerName="watcher-api"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.186115 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09ec704-8c28-487f-b90b-1a7d65252fb3" containerName="watcher-api"
Feb 17 09:04:49 crc kubenswrapper[4813]: E0217 09:04:49.186145 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09ec704-8c28-487f-b90b-1a7d65252fb3" containerName="watcher-kuttl-api-log"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.186158 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09ec704-8c28-487f-b90b-1a7d65252fb3"
containerName="watcher-kuttl-api-log"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.186450 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09ec704-8c28-487f-b90b-1a7d65252fb3" containerName="watcher-kuttl-api-log"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.186482 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09ec704-8c28-487f-b90b-1a7d65252fb3" containerName="watcher-api"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.187934 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.199546 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.204829 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.272540 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/643617fd-2c29-4236-b1df-4ab576203b19-logs\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.272744 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.272805 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.273036 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2j82\" (UniqueName: \"kubernetes.io/projected/643617fd-2c29-4236-b1df-4ab576203b19-kube-api-access-h2j82\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.273127 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.273188 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.375119 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.375177 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.375245 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2j82\" (UniqueName: \"kubernetes.io/projected/643617fd-2c29-4236-b1df-4ab576203b19-kube-api-access-h2j82\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.375285 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.375340 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.375375 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/643617fd-2c29-4236-b1df-4ab576203b19-logs\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.375972 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/643617fd-2c29-4236-b1df-4ab576203b19-logs\") pod \"watcher-kuttl-api-0\"
(UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.379427 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.379875 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.382952 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.384165 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.390995 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2j82\" (UniqueName: \"kubernetes.io/projected/643617fd-2c29-4236-b1df-4ab576203b19-kube-api-access-h2j82\") pod \"watcher-kuttl-api-0\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.515223 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:49 crc kubenswrapper[4813]: I0217 09:04:49.980754 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:04:51 crc kubenswrapper[4813]: I0217 09:04:51.317622 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09ec704-8c28-487f-b90b-1a7d65252fb3" path="/var/lib/kubelet/pods/c09ec704-8c28-487f-b90b-1a7d65252fb3/volumes"
Feb 17 09:04:51 crc kubenswrapper[4813]: I0217 09:04:51.319556 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:51 crc kubenswrapper[4813]: I0217 09:04:51.320475 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"643617fd-2c29-4236-b1df-4ab576203b19","Type":"ContainerStarted","Data":"df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6"}
Feb 17 09:04:51 crc kubenswrapper[4813]: I0217 09:04:51.320557 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"643617fd-2c29-4236-b1df-4ab576203b19","Type":"ContainerStarted","Data":"35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e"}
Feb 17 09:04:51 crc kubenswrapper[4813]: I0217 09:04:51.320625 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"643617fd-2c29-4236-b1df-4ab576203b19","Type":"ContainerStarted","Data":"2c238b017d76eca66cff5894e3452fd43ab26a5ec8a56cbf767571c863d8cabb"}
Feb 17 09:04:51 crc kubenswrapper[4813]: I0217 09:04:51.348426 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.34837542 podStartE2EDuration="2.34837542s"
podCreationTimestamp="2026-02-17 09:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:04:51.330858869 +0000 UTC m=+1438.991620132" watchObservedRunningTime="2026-02-17 09:04:51.34837542 +0000 UTC m=+1439.009136653"
Feb 17 09:04:52 crc kubenswrapper[4813]: I0217 09:04:52.173345 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:04:52 crc kubenswrapper[4813]: I0217 09:04:52.201771 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:04:52 crc kubenswrapper[4813]: I0217 09:04:52.313008 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:04:52 crc kubenswrapper[4813]: I0217 09:04:52.343628 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:04:53 crc kubenswrapper[4813]: I0217 09:04:53.552947 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:53 crc kubenswrapper[4813]: I0217 09:04:53.923077 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:04:54 crc kubenswrapper[4813]: I0217 09:04:54.346570 4813 scope.go:117] "RemoveContainer" containerID="34e4bbc6c62eb618eb95c3966aa0a057ce4ca1b60b558d976e59b34e43cd8860"
Feb 17 09:04:54 crc kubenswrapper[4813]: I0217 09:04:54.516695 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:59 crc kubenswrapper[4813]: I0217 09:04:59.516160 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy"
pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:04:59 crc kubenswrapper[4813]: I0217 09:04:59.520613 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:05:00 crc kubenswrapper[4813]: I0217 09:05:00.398907 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:05:05 crc kubenswrapper[4813]: I0217 09:05:05.165282 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 09:05:05 crc kubenswrapper[4813]: I0217 09:05:05.165718 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 09:05:15 crc kubenswrapper[4813]: I0217 09:05:15.911811 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-ffc69f97c-tr2zk"
Feb 17 09:05:16 crc kubenswrapper[4813]: I0217 09:05:16.015042 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-86bc75976-kcxmb"]
Feb 17 09:05:16 crc kubenswrapper[4813]: I0217 09:05:16.026786 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" podUID="ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" containerName="keystone-api" containerID="cri-o://b8724b5d7e5684c62837c463d4092d4df26104c55f3606d43afb97e56e15a887" gracePeriod=30
Feb 17 09:05:19 crc kubenswrapper[4813]: I0217 09:05:19.581683 4813 generic.go:334] "Generic (PLEG):
container finished" podID="ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" containerID="b8724b5d7e5684c62837c463d4092d4df26104c55f3606d43afb97e56e15a887" exitCode=0
Feb 17 09:05:19 crc kubenswrapper[4813]: I0217 09:05:19.581746 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" event={"ID":"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742","Type":"ContainerDied","Data":"b8724b5d7e5684c62837c463d4092d4df26104c55f3606d43afb97e56e15a887"}
Feb 17 09:05:19 crc kubenswrapper[4813]: I0217 09:05:19.943475 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-86bc75976-kcxmb"
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.107847 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-scripts\") pod \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") "
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.107904 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqws5\" (UniqueName: \"kubernetes.io/projected/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-kube-api-access-tqws5\") pod \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") "
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.107950 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-fernet-keys\") pod \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") "
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.107971 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-public-tls-certs\") pod
\"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") "
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.107990 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-combined-ca-bundle\") pod \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") "
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.108051 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-credential-keys\") pod \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") "
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.108088 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-internal-tls-certs\") pod \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") "
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.108188 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-config-data\") pod \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\" (UID: \"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742\") "
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.113329 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-scripts" (OuterVolumeSpecName: "scripts") pod "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" (UID: "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.115443 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" (UID: "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.118425 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-kube-api-access-tqws5" (OuterVolumeSpecName: "kube-api-access-tqws5") pod "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" (UID: "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742"). InnerVolumeSpecName "kube-api-access-tqws5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.135459 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" (UID: "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.140498 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" (UID: "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.142506 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-config-data" (OuterVolumeSpecName: "config-data") pod "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" (UID: "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.154693 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" (UID: "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.160452 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" (UID: "ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742"). InnerVolumeSpecName "internal-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.209882 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.209917 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.209929 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.209943 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqws5\" (UniqueName: \"kubernetes.io/projected/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-kube-api-access-tqws5\") on node \"crc\" DevicePath \"\""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.209956 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.209966 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.209976 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.209987 4813
reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.590164 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-86bc75976-kcxmb" event={"ID":"ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742","Type":"ContainerDied","Data":"38172d23e443e5399ec5a4e34a61d3b1abd5f1e1e61424dbad2d5a5fe5c62138"}
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.590209 4813 scope.go:117] "RemoveContainer" containerID="b8724b5d7e5684c62837c463d4092d4df26104c55f3606d43afb97e56e15a887"
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.590241 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-86bc75976-kcxmb"
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.616454 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-86bc75976-kcxmb"]
Feb 17 09:05:20 crc kubenswrapper[4813]: I0217 09:05:20.621477 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-86bc75976-kcxmb"]
Feb 17 09:05:21 crc kubenswrapper[4813]: I0217 09:05:21.123655 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" path="/var/lib/kubelet/pods/ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742/volumes"
Feb 17 09:05:21 crc kubenswrapper[4813]: I0217 09:05:21.739913 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:05:21 crc kubenswrapper[4813]: I0217 09:05:21.740485 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="ceilometer-central-agent"
containerID="cri-o://4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da" gracePeriod=30 Feb 17 09:05:21 crc kubenswrapper[4813]: I0217 09:05:21.740615 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="proxy-httpd" containerID="cri-o://bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7" gracePeriod=30 Feb 17 09:05:21 crc kubenswrapper[4813]: I0217 09:05:21.740655 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="sg-core" containerID="cri-o://26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c" gracePeriod=30 Feb 17 09:05:21 crc kubenswrapper[4813]: I0217 09:05:21.740687 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="ceilometer-notification-agent" containerID="cri-o://0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac" gracePeriod=30 Feb 17 09:05:22 crc kubenswrapper[4813]: I0217 09:05:22.623383 4813 generic.go:334] "Generic (PLEG): container finished" podID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerID="bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7" exitCode=0 Feb 17 09:05:22 crc kubenswrapper[4813]: I0217 09:05:22.623417 4813 generic.go:334] "Generic (PLEG): container finished" podID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerID="26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c" exitCode=2 Feb 17 09:05:22 crc kubenswrapper[4813]: I0217 09:05:22.623426 4813 generic.go:334] "Generic (PLEG): container finished" podID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerID="4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da" exitCode=0 Feb 17 09:05:22 crc kubenswrapper[4813]: I0217 
09:05:22.623452 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f85ac76b-def9-4260-a408-eecd5c9a3760","Type":"ContainerDied","Data":"bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7"} Feb 17 09:05:22 crc kubenswrapper[4813]: I0217 09:05:22.623490 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f85ac76b-def9-4260-a408-eecd5c9a3760","Type":"ContainerDied","Data":"26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c"} Feb 17 09:05:22 crc kubenswrapper[4813]: I0217 09:05:22.623503 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f85ac76b-def9-4260-a408-eecd5c9a3760","Type":"ContainerDied","Data":"4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da"} Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.436772 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.468384 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-sg-core-conf-yaml\") pod \"f85ac76b-def9-4260-a408-eecd5c9a3760\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.468434 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-scripts\") pod \"f85ac76b-def9-4260-a408-eecd5c9a3760\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.468480 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-config-data\") pod \"f85ac76b-def9-4260-a408-eecd5c9a3760\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.468527 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjwtn\" (UniqueName: \"kubernetes.io/projected/f85ac76b-def9-4260-a408-eecd5c9a3760-kube-api-access-fjwtn\") pod \"f85ac76b-def9-4260-a408-eecd5c9a3760\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.468560 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-ceilometer-tls-certs\") pod \"f85ac76b-def9-4260-a408-eecd5c9a3760\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.468594 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85ac76b-def9-4260-a408-eecd5c9a3760-log-httpd\") pod \"f85ac76b-def9-4260-a408-eecd5c9a3760\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.468617 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-combined-ca-bundle\") pod \"f85ac76b-def9-4260-a408-eecd5c9a3760\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.468674 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85ac76b-def9-4260-a408-eecd5c9a3760-run-httpd\") pod \"f85ac76b-def9-4260-a408-eecd5c9a3760\" (UID: \"f85ac76b-def9-4260-a408-eecd5c9a3760\") " Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 
09:05:23.469159 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85ac76b-def9-4260-a408-eecd5c9a3760-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f85ac76b-def9-4260-a408-eecd5c9a3760" (UID: "f85ac76b-def9-4260-a408-eecd5c9a3760"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.469243 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85ac76b-def9-4260-a408-eecd5c9a3760-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f85ac76b-def9-4260-a408-eecd5c9a3760" (UID: "f85ac76b-def9-4260-a408-eecd5c9a3760"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.477087 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85ac76b-def9-4260-a408-eecd5c9a3760-kube-api-access-fjwtn" (OuterVolumeSpecName: "kube-api-access-fjwtn") pod "f85ac76b-def9-4260-a408-eecd5c9a3760" (UID: "f85ac76b-def9-4260-a408-eecd5c9a3760"). InnerVolumeSpecName "kube-api-access-fjwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.477197 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-scripts" (OuterVolumeSpecName: "scripts") pod "f85ac76b-def9-4260-a408-eecd5c9a3760" (UID: "f85ac76b-def9-4260-a408-eecd5c9a3760"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.502899 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f85ac76b-def9-4260-a408-eecd5c9a3760" (UID: "f85ac76b-def9-4260-a408-eecd5c9a3760"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.534577 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f85ac76b-def9-4260-a408-eecd5c9a3760" (UID: "f85ac76b-def9-4260-a408-eecd5c9a3760"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.571055 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85ac76b-def9-4260-a408-eecd5c9a3760-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.571086 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.571096 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.571144 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjwtn\" (UniqueName: \"kubernetes.io/projected/f85ac76b-def9-4260-a408-eecd5c9a3760-kube-api-access-fjwtn\") on node \"crc\" 
DevicePath \"\"" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.571153 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.571161 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f85ac76b-def9-4260-a408-eecd5c9a3760-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.592928 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f85ac76b-def9-4260-a408-eecd5c9a3760" (UID: "f85ac76b-def9-4260-a408-eecd5c9a3760"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.603730 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-config-data" (OuterVolumeSpecName: "config-data") pod "f85ac76b-def9-4260-a408-eecd5c9a3760" (UID: "f85ac76b-def9-4260-a408-eecd5c9a3760"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.631926 4813 generic.go:334] "Generic (PLEG): container finished" podID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerID="0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac" exitCode=0 Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.631970 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f85ac76b-def9-4260-a408-eecd5c9a3760","Type":"ContainerDied","Data":"0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac"} Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.631995 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f85ac76b-def9-4260-a408-eecd5c9a3760","Type":"ContainerDied","Data":"9903d3cbf502f51d9d64cd7e9c494829b32f185b9a0c77659bea937068110db0"} Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.632012 4813 scope.go:117] "RemoveContainer" containerID="bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.632122 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.660712 4813 scope.go:117] "RemoveContainer" containerID="26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.676636 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.676679 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85ac76b-def9-4260-a408-eecd5c9a3760-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.676712 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.689508 4813 scope.go:117] "RemoveContainer" containerID="0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.694169 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.708156 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:05:23 crc kubenswrapper[4813]: E0217 09:05:23.710079 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="sg-core" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.710118 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="sg-core" Feb 17 09:05:23 crc kubenswrapper[4813]: E0217 09:05:23.710136 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" 
containerName="proxy-httpd" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.710149 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="proxy-httpd" Feb 17 09:05:23 crc kubenswrapper[4813]: E0217 09:05:23.710172 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" containerName="keystone-api" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.710183 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" containerName="keystone-api" Feb 17 09:05:23 crc kubenswrapper[4813]: E0217 09:05:23.710200 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="ceilometer-notification-agent" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.710212 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="ceilometer-notification-agent" Feb 17 09:05:23 crc kubenswrapper[4813]: E0217 09:05:23.710242 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="ceilometer-central-agent" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.710252 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="ceilometer-central-agent" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.710538 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="sg-core" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.710575 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="proxy-httpd" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.710596 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="ceilometer-notification-agent" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.710640 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad4a78b2-30d4-4e2a-a5a0-c2a63c92f742" containerName="keystone-api" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.710659 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" containerName="ceilometer-central-agent" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.713369 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.717547 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.717760 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.717933 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.719680 4813 scope.go:117] "RemoveContainer" containerID="4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.738537 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.749649 4813 scope.go:117] "RemoveContainer" containerID="bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7" Feb 17 09:05:23 crc kubenswrapper[4813]: E0217 09:05:23.749935 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7\": 
container with ID starting with bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7 not found: ID does not exist" containerID="bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.749971 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7"} err="failed to get container status \"bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7\": rpc error: code = NotFound desc = could not find container \"bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7\": container with ID starting with bc46323a7e821438efa76315688d337626ea33fdf1ac905aec789b8be1dee3d7 not found: ID does not exist" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.750000 4813 scope.go:117] "RemoveContainer" containerID="26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c" Feb 17 09:05:23 crc kubenswrapper[4813]: E0217 09:05:23.750389 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c\": container with ID starting with 26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c not found: ID does not exist" containerID="26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.750434 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c"} err="failed to get container status \"26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c\": rpc error: code = NotFound desc = could not find container \"26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c\": container with ID starting with 
26dc119af732eaf13915da8dae289734e0e5ef31582b066984d10f993e81a22c not found: ID does not exist" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.750461 4813 scope.go:117] "RemoveContainer" containerID="0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac" Feb 17 09:05:23 crc kubenswrapper[4813]: E0217 09:05:23.750843 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac\": container with ID starting with 0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac not found: ID does not exist" containerID="0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.750872 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac"} err="failed to get container status \"0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac\": rpc error: code = NotFound desc = could not find container \"0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac\": container with ID starting with 0f7e719ce704389491cdd9811b028278994f0c44ae6b9bfab65d68de1f3e3cac not found: ID does not exist" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.750929 4813 scope.go:117] "RemoveContainer" containerID="4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da" Feb 17 09:05:23 crc kubenswrapper[4813]: E0217 09:05:23.751532 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da\": container with ID starting with 4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da not found: ID does not exist" containerID="4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da" Feb 17 09:05:23 crc 
kubenswrapper[4813]: I0217 09:05:23.751567 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da"} err="failed to get container status \"4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da\": rpc error: code = NotFound desc = could not find container \"4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da\": container with ID starting with 4bfee4fa71785f5e76e8672dcd57a9b00ff7c1b0fa421af6f35f73b3d6be97da not found: ID does not exist" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.883048 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-config-data\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.883126 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhpfw\" (UniqueName: \"kubernetes.io/projected/8394f8aa-6ef4-4397-ab32-a951eb0c8334-kube-api-access-xhpfw\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.883190 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8394f8aa-6ef4-4397-ab32-a951eb0c8334-run-httpd\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.883210 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-scripts\") pod 
\"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.883243 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8394f8aa-6ef4-4397-ab32-a951eb0c8334-log-httpd\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.883270 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.883385 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.883457 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.985099 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhpfw\" (UniqueName: \"kubernetes.io/projected/8394f8aa-6ef4-4397-ab32-a951eb0c8334-kube-api-access-xhpfw\") pod \"ceilometer-0\" (UID: 
\"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.985238 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8394f8aa-6ef4-4397-ab32-a951eb0c8334-run-httpd\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.985278 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-scripts\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.985350 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8394f8aa-6ef4-4397-ab32-a951eb0c8334-log-httpd\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.985397 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.985458 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.985508 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.985584 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-config-data\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.985929 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8394f8aa-6ef4-4397-ab32-a951eb0c8334-run-httpd\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.985954 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8394f8aa-6ef4-4397-ab32-a951eb0c8334-log-httpd\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.989583 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.989916 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.989975 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-scripts\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.990791 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:23 crc kubenswrapper[4813]: I0217 09:05:23.991183 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-config-data\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:24 crc kubenswrapper[4813]: I0217 09:05:24.003013 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhpfw\" (UniqueName: \"kubernetes.io/projected/8394f8aa-6ef4-4397-ab32-a951eb0c8334-kube-api-access-xhpfw\") pod \"ceilometer-0\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:24 crc kubenswrapper[4813]: I0217 09:05:24.031264 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:24 crc kubenswrapper[4813]: I0217 09:05:24.551895 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:05:24 crc kubenswrapper[4813]: I0217 09:05:24.640533 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8394f8aa-6ef4-4397-ab32-a951eb0c8334","Type":"ContainerStarted","Data":"e7835959f4fdec987ae7fc13f070b6f05ebd6a7bc4edeed96294007b85e8aa04"} Feb 17 09:05:25 crc kubenswrapper[4813]: I0217 09:05:25.120107 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85ac76b-def9-4260-a408-eecd5c9a3760" path="/var/lib/kubelet/pods/f85ac76b-def9-4260-a408-eecd5c9a3760/volumes" Feb 17 09:05:25 crc kubenswrapper[4813]: I0217 09:05:25.651756 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8394f8aa-6ef4-4397-ab32-a951eb0c8334","Type":"ContainerStarted","Data":"5b7f2770d8a67eabeea1f7d09d4b85c065c6b026a0f0f32aee2da80f6cc7259c"} Feb 17 09:05:26 crc kubenswrapper[4813]: I0217 09:05:26.735732 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8394f8aa-6ef4-4397-ab32-a951eb0c8334","Type":"ContainerStarted","Data":"293ce66617fab8023bf86460314ea64245e962955522d9e9f4741b9b13b4fbc8"} Feb 17 09:05:27 crc kubenswrapper[4813]: I0217 09:05:27.749682 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8394f8aa-6ef4-4397-ab32-a951eb0c8334","Type":"ContainerStarted","Data":"c1b78017ecdf16cb502a0828b8cad72c709e0f4b098f1095ceee0b91b345e2d0"} Feb 17 09:05:28 crc kubenswrapper[4813]: I0217 09:05:28.763086 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"8394f8aa-6ef4-4397-ab32-a951eb0c8334","Type":"ContainerStarted","Data":"2e50aed84f8afbc2c945954deda193babb04a21fef2963308eb8302e2e48d696"} Feb 17 09:05:28 crc kubenswrapper[4813]: I0217 09:05:28.763738 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:28 crc kubenswrapper[4813]: I0217 09:05:28.795765 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.426836548 podStartE2EDuration="5.795740516s" podCreationTimestamp="2026-02-17 09:05:23 +0000 UTC" firstStartedPulling="2026-02-17 09:05:24.56604921 +0000 UTC m=+1472.226810433" lastFinishedPulling="2026-02-17 09:05:27.934953168 +0000 UTC m=+1475.595714401" observedRunningTime="2026-02-17 09:05:28.789097325 +0000 UTC m=+1476.449858568" watchObservedRunningTime="2026-02-17 09:05:28.795740516 +0000 UTC m=+1476.456501759" Feb 17 09:05:35 crc kubenswrapper[4813]: I0217 09:05:35.165661 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 09:05:35 crc kubenswrapper[4813]: I0217 09:05:35.166135 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.199610 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb"] Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.208023 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-db-sync-jtqlb"] Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.235703 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher94fc-account-delete-s7svx"] Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.237216 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.244026 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.244246 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="2cab1458-f302-4222-8023-e01ed086c0d1" containerName="watcher-applier" containerID="cri-o://e96bf8bcb33f896323a001be7234bbc18e84e3f96ed50a67dde9ad383fad5b4a" gracePeriod=30 Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.261118 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher94fc-account-delete-s7svx"] Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.280773 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b31adbfc-0750-418d-bff4-8e4d4a729678-operator-scripts\") pod \"watcher94fc-account-delete-s7svx\" (UID: \"b31adbfc-0750-418d-bff4-8e4d4a729678\") " pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.280934 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhbzq\" (UniqueName: \"kubernetes.io/projected/b31adbfc-0750-418d-bff4-8e4d4a729678-kube-api-access-hhbzq\") pod \"watcher94fc-account-delete-s7svx\" (UID: \"b31adbfc-0750-418d-bff4-8e4d4a729678\") " 
pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.298748 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.298981 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="643617fd-2c29-4236-b1df-4ab576203b19" containerName="watcher-kuttl-api-log" containerID="cri-o://35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e" gracePeriod=30 Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.299066 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="643617fd-2c29-4236-b1df-4ab576203b19" containerName="watcher-api" containerID="cri-o://df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6" gracePeriod=30 Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.348767 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.348976 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="b06587ba-6195-4568-a151-63d8b78e2d68" containerName="watcher-decision-engine" containerID="cri-o://cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec" gracePeriod=30 Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.381976 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b31adbfc-0750-418d-bff4-8e4d4a729678-operator-scripts\") pod \"watcher94fc-account-delete-s7svx\" (UID: \"b31adbfc-0750-418d-bff4-8e4d4a729678\") " pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 
09:05:45.382046 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhbzq\" (UniqueName: \"kubernetes.io/projected/b31adbfc-0750-418d-bff4-8e4d4a729678-kube-api-access-hhbzq\") pod \"watcher94fc-account-delete-s7svx\" (UID: \"b31adbfc-0750-418d-bff4-8e4d4a729678\") " pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.382975 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b31adbfc-0750-418d-bff4-8e4d4a729678-operator-scripts\") pod \"watcher94fc-account-delete-s7svx\" (UID: \"b31adbfc-0750-418d-bff4-8e4d4a729678\") " pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.400460 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhbzq\" (UniqueName: \"kubernetes.io/projected/b31adbfc-0750-418d-bff4-8e4d4a729678-kube-api-access-hhbzq\") pod \"watcher94fc-account-delete-s7svx\" (UID: \"b31adbfc-0750-418d-bff4-8e4d4a729678\") " pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.554566 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.933742 4813 generic.go:334] "Generic (PLEG): container finished" podID="643617fd-2c29-4236-b1df-4ab576203b19" containerID="35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e" exitCode=143 Feb 17 09:05:45 crc kubenswrapper[4813]: I0217 09:05:45.933903 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"643617fd-2c29-4236-b1df-4ab576203b19","Type":"ContainerDied","Data":"35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e"} Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.170575 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher94fc-account-delete-s7svx"] Feb 17 09:05:46 crc kubenswrapper[4813]: W0217 09:05:46.181447 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb31adbfc_0750_418d_bff4_8e4d4a729678.slice/crio-7b859fc6a4d5f73139a06f9304a2f003c314f916c25b1a249b0e7ea03adfb8db WatchSource:0}: Error finding container 7b859fc6a4d5f73139a06f9304a2f003c314f916c25b1a249b0e7ea03adfb8db: Status 404 returned error can't find the container with id 7b859fc6a4d5f73139a06f9304a2f003c314f916c25b1a249b0e7ea03adfb8db Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.597589 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.701755 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/643617fd-2c29-4236-b1df-4ab576203b19-logs\") pod \"643617fd-2c29-4236-b1df-4ab576203b19\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.701843 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-custom-prometheus-ca\") pod \"643617fd-2c29-4236-b1df-4ab576203b19\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.701894 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-config-data\") pod \"643617fd-2c29-4236-b1df-4ab576203b19\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.701930 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2j82\" (UniqueName: \"kubernetes.io/projected/643617fd-2c29-4236-b1df-4ab576203b19-kube-api-access-h2j82\") pod \"643617fd-2c29-4236-b1df-4ab576203b19\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.701991 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-cert-memcached-mtls\") pod \"643617fd-2c29-4236-b1df-4ab576203b19\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.702044 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-combined-ca-bundle\") pod \"643617fd-2c29-4236-b1df-4ab576203b19\" (UID: \"643617fd-2c29-4236-b1df-4ab576203b19\") " Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.704252 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/643617fd-2c29-4236-b1df-4ab576203b19-logs" (OuterVolumeSpecName: "logs") pod "643617fd-2c29-4236-b1df-4ab576203b19" (UID: "643617fd-2c29-4236-b1df-4ab576203b19"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.708419 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/643617fd-2c29-4236-b1df-4ab576203b19-kube-api-access-h2j82" (OuterVolumeSpecName: "kube-api-access-h2j82") pod "643617fd-2c29-4236-b1df-4ab576203b19" (UID: "643617fd-2c29-4236-b1df-4ab576203b19"). InnerVolumeSpecName "kube-api-access-h2j82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.763873 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-config-data" (OuterVolumeSpecName: "config-data") pod "643617fd-2c29-4236-b1df-4ab576203b19" (UID: "643617fd-2c29-4236-b1df-4ab576203b19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.772110 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "643617fd-2c29-4236-b1df-4ab576203b19" (UID: "643617fd-2c29-4236-b1df-4ab576203b19"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.775840 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "643617fd-2c29-4236-b1df-4ab576203b19" (UID: "643617fd-2c29-4236-b1df-4ab576203b19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.795195 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "643617fd-2c29-4236-b1df-4ab576203b19" (UID: "643617fd-2c29-4236-b1df-4ab576203b19"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.803942 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.803974 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2j82\" (UniqueName: \"kubernetes.io/projected/643617fd-2c29-4236-b1df-4ab576203b19-kube-api-access-h2j82\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.803986 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.803995 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.804004 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/643617fd-2c29-4236-b1df-4ab576203b19-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.804012 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/643617fd-2c29-4236-b1df-4ab576203b19-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.958198 4813 generic.go:334] "Generic (PLEG): container finished" podID="b31adbfc-0750-418d-bff4-8e4d4a729678" containerID="70b8d0872583601cec11540c208de75aec01fd00faacce13922c63d599db98ce" exitCode=0 Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.958260 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" event={"ID":"b31adbfc-0750-418d-bff4-8e4d4a729678","Type":"ContainerDied","Data":"70b8d0872583601cec11540c208de75aec01fd00faacce13922c63d599db98ce"} Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.958347 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" event={"ID":"b31adbfc-0750-418d-bff4-8e4d4a729678","Type":"ContainerStarted","Data":"7b859fc6a4d5f73139a06f9304a2f003c314f916c25b1a249b0e7ea03adfb8db"} Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.963588 4813 generic.go:334] "Generic (PLEG): container finished" podID="643617fd-2c29-4236-b1df-4ab576203b19" containerID="df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6" exitCode=0 Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.963638 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"643617fd-2c29-4236-b1df-4ab576203b19","Type":"ContainerDied","Data":"df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6"} Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.963670 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"643617fd-2c29-4236-b1df-4ab576203b19","Type":"ContainerDied","Data":"2c238b017d76eca66cff5894e3452fd43ab26a5ec8a56cbf767571c863d8cabb"} Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.963698 4813 scope.go:117] "RemoveContainer" containerID="df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.963885 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:05:46 crc kubenswrapper[4813]: I0217 09:05:46.992834 4813 scope.go:117] "RemoveContainer" containerID="35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e" Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.018529 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.022581 4813 scope.go:117] "RemoveContainer" containerID="df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6" Feb 17 09:05:47 crc kubenswrapper[4813]: E0217 09:05:47.023529 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6\": container with ID starting with df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6 not found: ID does not exist" containerID="df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6" Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.023592 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6"} err="failed to get container status \"df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6\": rpc error: code = NotFound desc = could not find container \"df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6\": container with ID starting with df390af95be1f9c3cf24c65f40149470c77063bedd87245f46928206502aa1e6 not found: ID does not exist" Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.023626 4813 scope.go:117] "RemoveContainer" containerID="35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e" Feb 17 09:05:47 crc kubenswrapper[4813]: E0217 09:05:47.024128 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e\": container with ID starting with 35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e not found: ID does not exist" containerID="35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e" Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.024178 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e"} err="failed to get container status \"35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e\": rpc error: code = NotFound desc = could not find container \"35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e\": container with ID starting with 35f62af6bd7832b1dcce539d04b1f1726616a081be18b502c7ad5b4ce87bac8e not found: ID does not exist" Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.027926 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.122756 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1c21930c-0101-46c6-82ff-08fcae8ecb02" path="/var/lib/kubelet/pods/1c21930c-0101-46c6-82ff-08fcae8ecb02/volumes" Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.123564 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="643617fd-2c29-4236-b1df-4ab576203b19" path="/var/lib/kubelet/pods/643617fd-2c29-4236-b1df-4ab576203b19/volumes" Feb 17 09:05:47 crc kubenswrapper[4813]: E0217 09:05:47.128645 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e96bf8bcb33f896323a001be7234bbc18e84e3f96ed50a67dde9ad383fad5b4a" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:05:47 crc kubenswrapper[4813]: E0217 09:05:47.131409 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e96bf8bcb33f896323a001be7234bbc18e84e3f96ed50a67dde9ad383fad5b4a" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:05:47 crc kubenswrapper[4813]: E0217 09:05:47.136967 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e96bf8bcb33f896323a001be7234bbc18e84e3f96ed50a67dde9ad383fad5b4a" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:05:47 crc kubenswrapper[4813]: E0217 09:05:47.137055 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="2cab1458-f302-4222-8023-e01ed086c0d1" containerName="watcher-applier" Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 
09:05:47.700478 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.700773 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="ceilometer-central-agent" containerID="cri-o://5b7f2770d8a67eabeea1f7d09d4b85c065c6b026a0f0f32aee2da80f6cc7259c" gracePeriod=30 Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.700824 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="ceilometer-notification-agent" containerID="cri-o://293ce66617fab8023bf86460314ea64245e962955522d9e9f4741b9b13b4fbc8" gracePeriod=30 Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.700870 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="sg-core" containerID="cri-o://c1b78017ecdf16cb502a0828b8cad72c709e0f4b098f1095ceee0b91b345e2d0" gracePeriod=30 Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.700906 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="proxy-httpd" containerID="cri-o://2e50aed84f8afbc2c945954deda193babb04a21fef2963308eb8302e2e48d696" gracePeriod=30 Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.730625 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.971978 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerID="2e50aed84f8afbc2c945954deda193babb04a21fef2963308eb8302e2e48d696" exitCode=0 Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.972007 4813 generic.go:334] "Generic (PLEG): container finished" podID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerID="c1b78017ecdf16cb502a0828b8cad72c709e0f4b098f1095ceee0b91b345e2d0" exitCode=2 Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.972061 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8394f8aa-6ef4-4397-ab32-a951eb0c8334","Type":"ContainerDied","Data":"2e50aed84f8afbc2c945954deda193babb04a21fef2963308eb8302e2e48d696"} Feb 17 09:05:47 crc kubenswrapper[4813]: I0217 09:05:47.972094 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8394f8aa-6ef4-4397-ab32-a951eb0c8334","Type":"ContainerDied","Data":"c1b78017ecdf16cb502a0828b8cad72c709e0f4b098f1095ceee0b91b345e2d0"} Feb 17 09:05:48 crc kubenswrapper[4813]: I0217 09:05:48.312820 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" Feb 17 09:05:48 crc kubenswrapper[4813]: I0217 09:05:48.429536 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhbzq\" (UniqueName: \"kubernetes.io/projected/b31adbfc-0750-418d-bff4-8e4d4a729678-kube-api-access-hhbzq\") pod \"b31adbfc-0750-418d-bff4-8e4d4a729678\" (UID: \"b31adbfc-0750-418d-bff4-8e4d4a729678\") " Feb 17 09:05:48 crc kubenswrapper[4813]: I0217 09:05:48.429837 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b31adbfc-0750-418d-bff4-8e4d4a729678-operator-scripts\") pod \"b31adbfc-0750-418d-bff4-8e4d4a729678\" (UID: \"b31adbfc-0750-418d-bff4-8e4d4a729678\") " Feb 17 09:05:48 crc kubenswrapper[4813]: I0217 09:05:48.430527 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b31adbfc-0750-418d-bff4-8e4d4a729678-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b31adbfc-0750-418d-bff4-8e4d4a729678" (UID: "b31adbfc-0750-418d-bff4-8e4d4a729678"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:05:48 crc kubenswrapper[4813]: I0217 09:05:48.434861 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b31adbfc-0750-418d-bff4-8e4d4a729678-kube-api-access-hhbzq" (OuterVolumeSpecName: "kube-api-access-hhbzq") pod "b31adbfc-0750-418d-bff4-8e4d4a729678" (UID: "b31adbfc-0750-418d-bff4-8e4d4a729678"). InnerVolumeSpecName "kube-api-access-hhbzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:05:48 crc kubenswrapper[4813]: I0217 09:05:48.532943 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhbzq\" (UniqueName: \"kubernetes.io/projected/b31adbfc-0750-418d-bff4-8e4d4a729678-kube-api-access-hhbzq\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:48 crc kubenswrapper[4813]: I0217 09:05:48.532983 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b31adbfc-0750-418d-bff4-8e4d4a729678-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:48 crc kubenswrapper[4813]: I0217 09:05:48.982263 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" event={"ID":"b31adbfc-0750-418d-bff4-8e4d4a729678","Type":"ContainerDied","Data":"7b859fc6a4d5f73139a06f9304a2f003c314f916c25b1a249b0e7ea03adfb8db"} Feb 17 09:05:48 crc kubenswrapper[4813]: I0217 09:05:48.982320 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b859fc6a4d5f73139a06f9304a2f003c314f916c25b1a249b0e7ea03adfb8db" Feb 17 09:05:48 crc kubenswrapper[4813]: I0217 09:05:48.982343 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher94fc-account-delete-s7svx" Feb 17 09:05:48 crc kubenswrapper[4813]: I0217 09:05:48.985117 4813 generic.go:334] "Generic (PLEG): container finished" podID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerID="5b7f2770d8a67eabeea1f7d09d4b85c065c6b026a0f0f32aee2da80f6cc7259c" exitCode=0 Feb 17 09:05:48 crc kubenswrapper[4813]: I0217 09:05:48.985161 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8394f8aa-6ef4-4397-ab32-a951eb0c8334","Type":"ContainerDied","Data":"5b7f2770d8a67eabeea1f7d09d4b85c065c6b026a0f0f32aee2da80f6cc7259c"} Feb 17 09:05:50 crc kubenswrapper[4813]: I0217 09:05:50.285921 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-9p5qz"] Feb 17 09:05:50 crc kubenswrapper[4813]: I0217 09:05:50.297663 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-9p5qz"] Feb 17 09:05:50 crc kubenswrapper[4813]: I0217 09:05:50.315288 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w"] Feb 17 09:05:50 crc kubenswrapper[4813]: I0217 09:05:50.321576 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-94fc-account-create-update-m2f7w"] Feb 17 09:05:50 crc kubenswrapper[4813]: I0217 09:05:50.327645 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher94fc-account-delete-s7svx"] Feb 17 09:05:50 crc kubenswrapper[4813]: I0217 09:05:50.333760 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher94fc-account-delete-s7svx"] Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.007732 4813 generic.go:334] "Generic (PLEG): container finished" podID="2cab1458-f302-4222-8023-e01ed086c0d1" containerID="e96bf8bcb33f896323a001be7234bbc18e84e3f96ed50a67dde9ad383fad5b4a" exitCode=0 Feb 17 09:05:51 
crc kubenswrapper[4813]: I0217 09:05:51.008030 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2cab1458-f302-4222-8023-e01ed086c0d1","Type":"ContainerDied","Data":"e96bf8bcb33f896323a001be7234bbc18e84e3f96ed50a67dde9ad383fad5b4a"} Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.122095 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="373f7522-5889-457c-a4fd-611eeb468cf6" path="/var/lib/kubelet/pods/373f7522-5889-457c-a4fd-611eeb468cf6/volumes" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.122677 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b31adbfc-0750-418d-bff4-8e4d4a729678" path="/var/lib/kubelet/pods/b31adbfc-0750-418d-bff4-8e4d4a729678/volumes" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.123123 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09576b1-96ef-453c-bea5-c6b76a69e4aa" path="/var/lib/kubelet/pods/c09576b1-96ef-453c-bea5-c6b76a69e4aa/volumes" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.186847 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.376277 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-config-data\") pod \"2cab1458-f302-4222-8023-e01ed086c0d1\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.376346 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-cert-memcached-mtls\") pod \"2cab1458-f302-4222-8023-e01ed086c0d1\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.376578 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-combined-ca-bundle\") pod \"2cab1458-f302-4222-8023-e01ed086c0d1\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.376636 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cab1458-f302-4222-8023-e01ed086c0d1-logs\") pod \"2cab1458-f302-4222-8023-e01ed086c0d1\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.376658 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w8zh\" (UniqueName: \"kubernetes.io/projected/2cab1458-f302-4222-8023-e01ed086c0d1-kube-api-access-8w8zh\") pod \"2cab1458-f302-4222-8023-e01ed086c0d1\" (UID: \"2cab1458-f302-4222-8023-e01ed086c0d1\") " Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.379282 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2cab1458-f302-4222-8023-e01ed086c0d1-logs" (OuterVolumeSpecName: "logs") pod "2cab1458-f302-4222-8023-e01ed086c0d1" (UID: "2cab1458-f302-4222-8023-e01ed086c0d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.387919 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cab1458-f302-4222-8023-e01ed086c0d1-kube-api-access-8w8zh" (OuterVolumeSpecName: "kube-api-access-8w8zh") pod "2cab1458-f302-4222-8023-e01ed086c0d1" (UID: "2cab1458-f302-4222-8023-e01ed086c0d1"). InnerVolumeSpecName "kube-api-access-8w8zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.411547 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cab1458-f302-4222-8023-e01ed086c0d1" (UID: "2cab1458-f302-4222-8023-e01ed086c0d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.430833 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-config-data" (OuterVolumeSpecName: "config-data") pod "2cab1458-f302-4222-8023-e01ed086c0d1" (UID: "2cab1458-f302-4222-8023-e01ed086c0d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.448494 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "2cab1458-f302-4222-8023-e01ed086c0d1" (UID: "2cab1458-f302-4222-8023-e01ed086c0d1"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.479949 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.479989 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cab1458-f302-4222-8023-e01ed086c0d1-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.480003 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w8zh\" (UniqueName: \"kubernetes.io/projected/2cab1458-f302-4222-8023-e01ed086c0d1-kube-api-access-8w8zh\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.480018 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.480032 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2cab1458-f302-4222-8023-e01ed086c0d1-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.729819 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.902979 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-combined-ca-bundle\") pod \"b06587ba-6195-4568-a151-63d8b78e2d68\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.903470 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-config-data\") pod \"b06587ba-6195-4568-a151-63d8b78e2d68\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.903564 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-cert-memcached-mtls\") pod \"b06587ba-6195-4568-a151-63d8b78e2d68\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.903652 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-custom-prometheus-ca\") pod \"b06587ba-6195-4568-a151-63d8b78e2d68\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.903690 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6h9x\" (UniqueName: \"kubernetes.io/projected/b06587ba-6195-4568-a151-63d8b78e2d68-kube-api-access-x6h9x\") pod \"b06587ba-6195-4568-a151-63d8b78e2d68\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.903764 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b06587ba-6195-4568-a151-63d8b78e2d68-logs\") pod \"b06587ba-6195-4568-a151-63d8b78e2d68\" (UID: \"b06587ba-6195-4568-a151-63d8b78e2d68\") " Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.904513 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b06587ba-6195-4568-a151-63d8b78e2d68-logs" (OuterVolumeSpecName: "logs") pod "b06587ba-6195-4568-a151-63d8b78e2d68" (UID: "b06587ba-6195-4568-a151-63d8b78e2d68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.908368 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06587ba-6195-4568-a151-63d8b78e2d68-kube-api-access-x6h9x" (OuterVolumeSpecName: "kube-api-access-x6h9x") pod "b06587ba-6195-4568-a151-63d8b78e2d68" (UID: "b06587ba-6195-4568-a151-63d8b78e2d68"). InnerVolumeSpecName "kube-api-access-x6h9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.932828 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b06587ba-6195-4568-a151-63d8b78e2d68" (UID: "b06587ba-6195-4568-a151-63d8b78e2d68"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.938818 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b06587ba-6195-4568-a151-63d8b78e2d68" (UID: "b06587ba-6195-4568-a151-63d8b78e2d68"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.950637 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-config-data" (OuterVolumeSpecName: "config-data") pod "b06587ba-6195-4568-a151-63d8b78e2d68" (UID: "b06587ba-6195-4568-a151-63d8b78e2d68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:51 crc kubenswrapper[4813]: I0217 09:05:51.973846 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b06587ba-6195-4568-a151-63d8b78e2d68" (UID: "b06587ba-6195-4568-a151-63d8b78e2d68"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.006752 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.006798 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.006813 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.006826 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6h9x\" (UniqueName: \"kubernetes.io/projected/b06587ba-6195-4568-a151-63d8b78e2d68-kube-api-access-x6h9x\") on node 
\"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.006839 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b06587ba-6195-4568-a151-63d8b78e2d68-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.006852 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06587ba-6195-4568-a151-63d8b78e2d68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.019872 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2cab1458-f302-4222-8023-e01ed086c0d1","Type":"ContainerDied","Data":"c8fb25dd5de8997dad7fe973c364b912e0c234126a542667e64b048884bdab86"} Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.019963 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.019923 4813 scope.go:117] "RemoveContainer" containerID="e96bf8bcb33f896323a001be7234bbc18e84e3f96ed50a67dde9ad383fad5b4a" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.027633 4813 generic.go:334] "Generic (PLEG): container finished" podID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerID="293ce66617fab8023bf86460314ea64245e962955522d9e9f4741b9b13b4fbc8" exitCode=0 Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.027710 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8394f8aa-6ef4-4397-ab32-a951eb0c8334","Type":"ContainerDied","Data":"293ce66617fab8023bf86460314ea64245e962955522d9e9f4741b9b13b4fbc8"} Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.029758 4813 generic.go:334] "Generic (PLEG): container finished" podID="b06587ba-6195-4568-a151-63d8b78e2d68" 
containerID="cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec" exitCode=0 Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.029787 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b06587ba-6195-4568-a151-63d8b78e2d68","Type":"ContainerDied","Data":"cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec"} Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.029807 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b06587ba-6195-4568-a151-63d8b78e2d68","Type":"ContainerDied","Data":"afefa61171a454dc5a2ffee31ab07228773aea9d36fea2a8876b1b7af419b030"} Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.029860 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.052499 4813 scope.go:117] "RemoveContainer" containerID="cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.085745 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.097833 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.104338 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.105695 4813 scope.go:117] "RemoveContainer" containerID="cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec" Feb 17 09:05:52 crc kubenswrapper[4813]: E0217 09:05:52.106069 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec\": container with ID starting with cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec not found: ID does not exist" containerID="cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.106147 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec"} err="failed to get container status \"cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec\": rpc error: code = NotFound desc = could not find container \"cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec\": container with ID starting with cc4c759da142322af14576e6e0e066f17c0388ab25372aee6ca9d8f98d1c6dec not found: ID does not exist" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.111425 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.163630 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.209266 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8394f8aa-6ef4-4397-ab32-a951eb0c8334-run-httpd\") pod \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.210352 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8394f8aa-6ef4-4397-ab32-a951eb0c8334-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8394f8aa-6ef4-4397-ab32-a951eb0c8334" (UID: "8394f8aa-6ef4-4397-ab32-a951eb0c8334"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.310621 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-ceilometer-tls-certs\") pod \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.310686 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-config-data\") pod \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.310714 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhpfw\" (UniqueName: \"kubernetes.io/projected/8394f8aa-6ef4-4397-ab32-a951eb0c8334-kube-api-access-xhpfw\") pod \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.310750 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8394f8aa-6ef4-4397-ab32-a951eb0c8334-log-httpd\") pod \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.310802 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-sg-core-conf-yaml\") pod \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.310819 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-scripts\") pod \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.310845 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-combined-ca-bundle\") pod \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\" (UID: \"8394f8aa-6ef4-4397-ab32-a951eb0c8334\") " Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.311144 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8394f8aa-6ef4-4397-ab32-a951eb0c8334-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.311380 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8394f8aa-6ef4-4397-ab32-a951eb0c8334-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8394f8aa-6ef4-4397-ab32-a951eb0c8334" (UID: "8394f8aa-6ef4-4397-ab32-a951eb0c8334"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.314121 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8394f8aa-6ef4-4397-ab32-a951eb0c8334-kube-api-access-xhpfw" (OuterVolumeSpecName: "kube-api-access-xhpfw") pod "8394f8aa-6ef4-4397-ab32-a951eb0c8334" (UID: "8394f8aa-6ef4-4397-ab32-a951eb0c8334"). InnerVolumeSpecName "kube-api-access-xhpfw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.314770 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-scripts" (OuterVolumeSpecName: "scripts") pod "8394f8aa-6ef4-4397-ab32-a951eb0c8334" (UID: "8394f8aa-6ef4-4397-ab32-a951eb0c8334"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.353623 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8394f8aa-6ef4-4397-ab32-a951eb0c8334" (UID: "8394f8aa-6ef4-4397-ab32-a951eb0c8334"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.369374 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8394f8aa-6ef4-4397-ab32-a951eb0c8334" (UID: "8394f8aa-6ef4-4397-ab32-a951eb0c8334"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.379410 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8394f8aa-6ef4-4397-ab32-a951eb0c8334" (UID: "8394f8aa-6ef4-4397-ab32-a951eb0c8334"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.413136 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.413169 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhpfw\" (UniqueName: \"kubernetes.io/projected/8394f8aa-6ef4-4397-ab32-a951eb0c8334-kube-api-access-xhpfw\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.413181 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8394f8aa-6ef4-4397-ab32-a951eb0c8334-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.413190 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.413199 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.413207 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.416251 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-config-data" (OuterVolumeSpecName: "config-data") pod "8394f8aa-6ef4-4397-ab32-a951eb0c8334" (UID: 
"8394f8aa-6ef4-4397-ab32-a951eb0c8334"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:05:52 crc kubenswrapper[4813]: I0217 09:05:52.514898 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8394f8aa-6ef4-4397-ab32-a951eb0c8334-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.044892 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8394f8aa-6ef4-4397-ab32-a951eb0c8334","Type":"ContainerDied","Data":"e7835959f4fdec987ae7fc13f070b6f05ebd6a7bc4edeed96294007b85e8aa04"} Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.044979 4813 scope.go:117] "RemoveContainer" containerID="2e50aed84f8afbc2c945954deda193babb04a21fef2963308eb8302e2e48d696" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.045043 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.094942 4813 scope.go:117] "RemoveContainer" containerID="c1b78017ecdf16cb502a0828b8cad72c709e0f4b098f1095ceee0b91b345e2d0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.172828 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cab1458-f302-4222-8023-e01ed086c0d1" path="/var/lib/kubelet/pods/2cab1458-f302-4222-8023-e01ed086c0d1/volumes" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.173997 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06587ba-6195-4568-a151-63d8b78e2d68" path="/var/lib/kubelet/pods/b06587ba-6195-4568-a151-63d8b78e2d68/volumes" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.174641 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.174674 4813 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.177397 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:05:53 crc kubenswrapper[4813]: E0217 09:05:53.177913 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="proxy-httpd" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.177935 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="proxy-httpd" Feb 17 09:05:53 crc kubenswrapper[4813]: E0217 09:05:53.177947 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cab1458-f302-4222-8023-e01ed086c0d1" containerName="watcher-applier" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.177957 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cab1458-f302-4222-8023-e01ed086c0d1" containerName="watcher-applier" Feb 17 09:05:53 crc kubenswrapper[4813]: E0217 09:05:53.178139 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="643617fd-2c29-4236-b1df-4ab576203b19" containerName="watcher-kuttl-api-log" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178158 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="643617fd-2c29-4236-b1df-4ab576203b19" containerName="watcher-kuttl-api-log" Feb 17 09:05:53 crc kubenswrapper[4813]: E0217 09:05:53.178174 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="ceilometer-notification-agent" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178182 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="ceilometer-notification-agent" Feb 17 09:05:53 crc kubenswrapper[4813]: E0217 09:05:53.178201 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="643617fd-2c29-4236-b1df-4ab576203b19" containerName="watcher-api" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178208 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="643617fd-2c29-4236-b1df-4ab576203b19" containerName="watcher-api" Feb 17 09:05:53 crc kubenswrapper[4813]: E0217 09:05:53.178223 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="sg-core" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178230 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="sg-core" Feb 17 09:05:53 crc kubenswrapper[4813]: E0217 09:05:53.178242 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="ceilometer-central-agent" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178250 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="ceilometer-central-agent" Feb 17 09:05:53 crc kubenswrapper[4813]: E0217 09:05:53.178264 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06587ba-6195-4568-a151-63d8b78e2d68" containerName="watcher-decision-engine" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178273 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06587ba-6195-4568-a151-63d8b78e2d68" containerName="watcher-decision-engine" Feb 17 09:05:53 crc kubenswrapper[4813]: E0217 09:05:53.178285 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b31adbfc-0750-418d-bff4-8e4d4a729678" containerName="mariadb-account-delete" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178293 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b31adbfc-0750-418d-bff4-8e4d4a729678" containerName="mariadb-account-delete" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178494 4813 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="ceilometer-central-agent" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178510 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="sg-core" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178524 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="643617fd-2c29-4236-b1df-4ab576203b19" containerName="watcher-api" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178537 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="ceilometer-notification-agent" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178548 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06587ba-6195-4568-a151-63d8b78e2d68" containerName="watcher-decision-engine" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178560 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cab1458-f302-4222-8023-e01ed086c0d1" containerName="watcher-applier" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178572 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b31adbfc-0750-418d-bff4-8e4d4a729678" containerName="mariadb-account-delete" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178589 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" containerName="proxy-httpd" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.178602 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="643617fd-2c29-4236-b1df-4ab576203b19" containerName="watcher-kuttl-api-log" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.180506 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.184221 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.184458 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.185196 4813 scope.go:117] "RemoveContainer" containerID="293ce66617fab8023bf86460314ea64245e962955522d9e9f4741b9b13b4fbc8" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.186734 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.192966 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.215088 4813 scope.go:117] "RemoveContainer" containerID="5b7f2770d8a67eabeea1f7d09d4b85c065c6b026a0f0f32aee2da80f6cc7259c" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.262775 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7cf6603-0290-498b-9746-2e6bf08cd6a6-log-httpd\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.262838 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-scripts\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.262867 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7cf6603-0290-498b-9746-2e6bf08cd6a6-run-httpd\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.262884 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlvfr\" (UniqueName: \"kubernetes.io/projected/c7cf6603-0290-498b-9746-2e6bf08cd6a6-kube-api-access-rlvfr\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.262955 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.262975 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-config-data\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.263000 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.263035 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.364766 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.364822 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-config-data\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.364849 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.364891 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.364930 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7cf6603-0290-498b-9746-2e6bf08cd6a6-log-httpd\") pod 
\"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.364974 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-scripts\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.365003 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7cf6603-0290-498b-9746-2e6bf08cd6a6-run-httpd\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.365022 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlvfr\" (UniqueName: \"kubernetes.io/projected/c7cf6603-0290-498b-9746-2e6bf08cd6a6-kube-api-access-rlvfr\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.368074 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7cf6603-0290-498b-9746-2e6bf08cd6a6-log-httpd\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.368778 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7cf6603-0290-498b-9746-2e6bf08cd6a6-run-httpd\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.371934 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-config-data\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.373166 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.377563 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-scripts\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.379287 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.389962 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.395497 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlvfr\" (UniqueName: \"kubernetes.io/projected/c7cf6603-0290-498b-9746-2e6bf08cd6a6-kube-api-access-rlvfr\") pod \"ceilometer-0\" (UID: 
\"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.508580 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:53 crc kubenswrapper[4813]: I0217 09:05:53.949979 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.057935 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c7cf6603-0290-498b-9746-2e6bf08cd6a6","Type":"ContainerStarted","Data":"9b89a029aec897dc2dd37fb33e709dd5d42e9659c815ef58e633641d02370f2b"} Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.522248 4813 scope.go:117] "RemoveContainer" containerID="91ab1f88022585747f5adf3ced1f2693c996a0b906e7fb8cd3b6ee12b86e90b6" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.595669 4813 scope.go:117] "RemoveContainer" containerID="74c3931972a17dcfc60833eee3a0251f98e10b8c212d09d35aa97bcc0c193273" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.712540 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-zcm7m"] Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.713551 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-zcm7m" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.718175 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-8e65-account-create-update-8btw6"] Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.719214 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.721123 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.734922 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-zcm7m"] Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.748216 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-8e65-account-create-update-8btw6"] Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.889553 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9db8g\" (UniqueName: \"kubernetes.io/projected/df1c85a8-48d9-4697-810a-07f11e4fd052-kube-api-access-9db8g\") pod \"watcher-8e65-account-create-update-8btw6\" (UID: \"df1c85a8-48d9-4697-810a-07f11e4fd052\") " pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.889619 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlbt6\" (UniqueName: \"kubernetes.io/projected/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03-kube-api-access-qlbt6\") pod \"watcher-db-create-zcm7m\" (UID: \"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03\") " pod="watcher-kuttl-default/watcher-db-create-zcm7m" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.889672 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03-operator-scripts\") pod \"watcher-db-create-zcm7m\" (UID: \"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03\") " pod="watcher-kuttl-default/watcher-db-create-zcm7m" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.889714 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df1c85a8-48d9-4697-810a-07f11e4fd052-operator-scripts\") pod \"watcher-8e65-account-create-update-8btw6\" (UID: \"df1c85a8-48d9-4697-810a-07f11e4fd052\") " pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.991516 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9db8g\" (UniqueName: \"kubernetes.io/projected/df1c85a8-48d9-4697-810a-07f11e4fd052-kube-api-access-9db8g\") pod \"watcher-8e65-account-create-update-8btw6\" (UID: \"df1c85a8-48d9-4697-810a-07f11e4fd052\") " pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.991597 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlbt6\" (UniqueName: \"kubernetes.io/projected/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03-kube-api-access-qlbt6\") pod \"watcher-db-create-zcm7m\" (UID: \"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03\") " pod="watcher-kuttl-default/watcher-db-create-zcm7m" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.991666 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03-operator-scripts\") pod \"watcher-db-create-zcm7m\" (UID: \"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03\") " pod="watcher-kuttl-default/watcher-db-create-zcm7m" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.991721 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df1c85a8-48d9-4697-810a-07f11e4fd052-operator-scripts\") pod \"watcher-8e65-account-create-update-8btw6\" (UID: \"df1c85a8-48d9-4697-810a-07f11e4fd052\") " 
pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.992494 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03-operator-scripts\") pod \"watcher-db-create-zcm7m\" (UID: \"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03\") " pod="watcher-kuttl-default/watcher-db-create-zcm7m" Feb 17 09:05:54 crc kubenswrapper[4813]: I0217 09:05:54.992600 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df1c85a8-48d9-4697-810a-07f11e4fd052-operator-scripts\") pod \"watcher-8e65-account-create-update-8btw6\" (UID: \"df1c85a8-48d9-4697-810a-07f11e4fd052\") " pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" Feb 17 09:05:55 crc kubenswrapper[4813]: I0217 09:05:55.010068 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlbt6\" (UniqueName: \"kubernetes.io/projected/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03-kube-api-access-qlbt6\") pod \"watcher-db-create-zcm7m\" (UID: \"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03\") " pod="watcher-kuttl-default/watcher-db-create-zcm7m" Feb 17 09:05:55 crc kubenswrapper[4813]: I0217 09:05:55.010836 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9db8g\" (UniqueName: \"kubernetes.io/projected/df1c85a8-48d9-4697-810a-07f11e4fd052-kube-api-access-9db8g\") pod \"watcher-8e65-account-create-update-8btw6\" (UID: \"df1c85a8-48d9-4697-810a-07f11e4fd052\") " pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" Feb 17 09:05:55 crc kubenswrapper[4813]: I0217 09:05:55.066020 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"c7cf6603-0290-498b-9746-2e6bf08cd6a6","Type":"ContainerStarted","Data":"4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d"} Feb 17 09:05:55 crc kubenswrapper[4813]: I0217 09:05:55.098352 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-zcm7m" Feb 17 09:05:55 crc kubenswrapper[4813]: I0217 09:05:55.104606 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" Feb 17 09:05:55 crc kubenswrapper[4813]: I0217 09:05:55.120218 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8394f8aa-6ef4-4397-ab32-a951eb0c8334" path="/var/lib/kubelet/pods/8394f8aa-6ef4-4397-ab32-a951eb0c8334/volumes" Feb 17 09:05:55 crc kubenswrapper[4813]: I0217 09:05:55.580322 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-8e65-account-create-update-8btw6"] Feb 17 09:05:55 crc kubenswrapper[4813]: I0217 09:05:55.673573 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-zcm7m"] Feb 17 09:05:56 crc kubenswrapper[4813]: I0217 09:05:56.076160 4813 generic.go:334] "Generic (PLEG): container finished" podID="3637c9fb-4fbe-4b09-89ca-e6b78e90fa03" containerID="bf9070105d0566c671684053c8d7b044c62f7e7fcae0686018656c9385189999" exitCode=0 Feb 17 09:05:56 crc kubenswrapper[4813]: I0217 09:05:56.076362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-zcm7m" event={"ID":"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03","Type":"ContainerDied","Data":"bf9070105d0566c671684053c8d7b044c62f7e7fcae0686018656c9385189999"} Feb 17 09:05:56 crc kubenswrapper[4813]: I0217 09:05:56.076584 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-zcm7m" 
event={"ID":"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03","Type":"ContainerStarted","Data":"98cfb22daf0aa9924405dcd937bc8535b7b96522b29b042ca90014105e8258ed"} Feb 17 09:05:56 crc kubenswrapper[4813]: I0217 09:05:56.079123 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c7cf6603-0290-498b-9746-2e6bf08cd6a6","Type":"ContainerStarted","Data":"3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789"} Feb 17 09:05:56 crc kubenswrapper[4813]: I0217 09:05:56.080867 4813 generic.go:334] "Generic (PLEG): container finished" podID="df1c85a8-48d9-4697-810a-07f11e4fd052" containerID="a9d0d3e429cb0c338b619256b79cad9bdb6a28e106283ef12137736ea16d47de" exitCode=0 Feb 17 09:05:56 crc kubenswrapper[4813]: I0217 09:05:56.080975 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" event={"ID":"df1c85a8-48d9-4697-810a-07f11e4fd052","Type":"ContainerDied","Data":"a9d0d3e429cb0c338b619256b79cad9bdb6a28e106283ef12137736ea16d47de"} Feb 17 09:05:56 crc kubenswrapper[4813]: I0217 09:05:56.081070 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" event={"ID":"df1c85a8-48d9-4697-810a-07f11e4fd052","Type":"ContainerStarted","Data":"735f2bceaf6f9d0b7d4b60a23a83c22d16cc635687e49acb9571bb53a57546b5"} Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.101542 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c7cf6603-0290-498b-9746-2e6bf08cd6a6","Type":"ContainerStarted","Data":"8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb"} Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.636278 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-zcm7m" Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.642838 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.739617 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03-operator-scripts\") pod \"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03\" (UID: \"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03\") " Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.739918 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlbt6\" (UniqueName: \"kubernetes.io/projected/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03-kube-api-access-qlbt6\") pod \"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03\" (UID: \"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03\") " Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.741793 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3637c9fb-4fbe-4b09-89ca-e6b78e90fa03" (UID: "3637c9fb-4fbe-4b09-89ca-e6b78e90fa03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.744457 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03-kube-api-access-qlbt6" (OuterVolumeSpecName: "kube-api-access-qlbt6") pod "3637c9fb-4fbe-4b09-89ca-e6b78e90fa03" (UID: "3637c9fb-4fbe-4b09-89ca-e6b78e90fa03"). InnerVolumeSpecName "kube-api-access-qlbt6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.841298 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9db8g\" (UniqueName: \"kubernetes.io/projected/df1c85a8-48d9-4697-810a-07f11e4fd052-kube-api-access-9db8g\") pod \"df1c85a8-48d9-4697-810a-07f11e4fd052\" (UID: \"df1c85a8-48d9-4697-810a-07f11e4fd052\") " Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.841366 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df1c85a8-48d9-4697-810a-07f11e4fd052-operator-scripts\") pod \"df1c85a8-48d9-4697-810a-07f11e4fd052\" (UID: \"df1c85a8-48d9-4697-810a-07f11e4fd052\") " Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.841708 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.841724 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlbt6\" (UniqueName: \"kubernetes.io/projected/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03-kube-api-access-qlbt6\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.841898 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df1c85a8-48d9-4697-810a-07f11e4fd052-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df1c85a8-48d9-4697-810a-07f11e4fd052" (UID: "df1c85a8-48d9-4697-810a-07f11e4fd052"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.847414 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1c85a8-48d9-4697-810a-07f11e4fd052-kube-api-access-9db8g" (OuterVolumeSpecName: "kube-api-access-9db8g") pod "df1c85a8-48d9-4697-810a-07f11e4fd052" (UID: "df1c85a8-48d9-4697-810a-07f11e4fd052"). InnerVolumeSpecName "kube-api-access-9db8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.943301 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9db8g\" (UniqueName: \"kubernetes.io/projected/df1c85a8-48d9-4697-810a-07f11e4fd052-kube-api-access-9db8g\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:57 crc kubenswrapper[4813]: I0217 09:05:57.943348 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df1c85a8-48d9-4697-810a-07f11e4fd052-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:05:58 crc kubenswrapper[4813]: I0217 09:05:58.124742 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-zcm7m" event={"ID":"3637c9fb-4fbe-4b09-89ca-e6b78e90fa03","Type":"ContainerDied","Data":"98cfb22daf0aa9924405dcd937bc8535b7b96522b29b042ca90014105e8258ed"} Feb 17 09:05:58 crc kubenswrapper[4813]: I0217 09:05:58.124777 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98cfb22daf0aa9924405dcd937bc8535b7b96522b29b042ca90014105e8258ed" Feb 17 09:05:58 crc kubenswrapper[4813]: I0217 09:05:58.124825 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-zcm7m" Feb 17 09:05:58 crc kubenswrapper[4813]: I0217 09:05:58.136141 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" event={"ID":"df1c85a8-48d9-4697-810a-07f11e4fd052","Type":"ContainerDied","Data":"735f2bceaf6f9d0b7d4b60a23a83c22d16cc635687e49acb9571bb53a57546b5"} Feb 17 09:05:58 crc kubenswrapper[4813]: I0217 09:05:58.136185 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735f2bceaf6f9d0b7d4b60a23a83c22d16cc635687e49acb9571bb53a57546b5" Feb 17 09:05:58 crc kubenswrapper[4813]: I0217 09:05:58.136251 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-8e65-account-create-update-8btw6" Feb 17 09:05:58 crc kubenswrapper[4813]: I0217 09:05:58.145261 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c7cf6603-0290-498b-9746-2e6bf08cd6a6","Type":"ContainerStarted","Data":"aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f"} Feb 17 09:05:58 crc kubenswrapper[4813]: I0217 09:05:58.145564 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:05:58 crc kubenswrapper[4813]: I0217 09:05:58.183804 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.7435640700000001 podStartE2EDuration="5.183786509s" podCreationTimestamp="2026-02-17 09:05:53 +0000 UTC" firstStartedPulling="2026-02-17 09:05:53.96902529 +0000 UTC m=+1501.629786513" lastFinishedPulling="2026-02-17 09:05:57.409247729 +0000 UTC m=+1505.070008952" observedRunningTime="2026-02-17 09:05:58.179528587 +0000 UTC m=+1505.840289810" watchObservedRunningTime="2026-02-17 09:05:58.183786509 +0000 UTC m=+1505.844547732" Feb 17 09:06:00 crc kubenswrapper[4813]: 
I0217 09:06:00.378405 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s"] Feb 17 09:06:00 crc kubenswrapper[4813]: E0217 09:06:00.379367 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1c85a8-48d9-4697-810a-07f11e4fd052" containerName="mariadb-account-create-update" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.379381 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1c85a8-48d9-4697-810a-07f11e4fd052" containerName="mariadb-account-create-update" Feb 17 09:06:00 crc kubenswrapper[4813]: E0217 09:06:00.379408 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3637c9fb-4fbe-4b09-89ca-e6b78e90fa03" containerName="mariadb-database-create" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.379416 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3637c9fb-4fbe-4b09-89ca-e6b78e90fa03" containerName="mariadb-database-create" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.379579 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1c85a8-48d9-4697-810a-07f11e4fd052" containerName="mariadb-account-create-update" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.379589 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3637c9fb-4fbe-4b09-89ca-e6b78e90fa03" containerName="mariadb-database-create" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.380079 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.382185 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.382777 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-gv8rg" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.392341 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s"] Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.481455 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-2dw6s\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.481507 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-db-sync-config-data\") pod \"watcher-kuttl-db-sync-2dw6s\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.481645 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-config-data\") pod \"watcher-kuttl-db-sync-2dw6s\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.481696 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d8lp\" (UniqueName: \"kubernetes.io/projected/499f7c94-130a-4d1b-bb86-a42f9a20b869-kube-api-access-8d8lp\") pod \"watcher-kuttl-db-sync-2dw6s\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.582940 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-2dw6s\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.582978 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-db-sync-config-data\") pod \"watcher-kuttl-db-sync-2dw6s\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.583060 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-config-data\") pod \"watcher-kuttl-db-sync-2dw6s\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.583086 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d8lp\" (UniqueName: \"kubernetes.io/projected/499f7c94-130a-4d1b-bb86-a42f9a20b869-kube-api-access-8d8lp\") pod \"watcher-kuttl-db-sync-2dw6s\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 
09:06:00.599379 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-config-data\") pod \"watcher-kuttl-db-sync-2dw6s\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.599623 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-2dw6s\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.599937 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-db-sync-config-data\") pod \"watcher-kuttl-db-sync-2dw6s\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.605523 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d8lp\" (UniqueName: \"kubernetes.io/projected/499f7c94-130a-4d1b-bb86-a42f9a20b869-kube-api-access-8d8lp\") pod \"watcher-kuttl-db-sync-2dw6s\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:00 crc kubenswrapper[4813]: I0217 09:06:00.701124 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:01 crc kubenswrapper[4813]: I0217 09:06:01.191090 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s"] Feb 17 09:06:02 crc kubenswrapper[4813]: I0217 09:06:02.175373 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" event={"ID":"499f7c94-130a-4d1b-bb86-a42f9a20b869","Type":"ContainerStarted","Data":"687a2b900e7051ee434fcbcf534a0602b0ffb38c9f4b802da5b38745a722ae50"} Feb 17 09:06:02 crc kubenswrapper[4813]: I0217 09:06:02.175747 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" event={"ID":"499f7c94-130a-4d1b-bb86-a42f9a20b869","Type":"ContainerStarted","Data":"df0b984664079abecbe3d02eeafc61fab3e27267bdf3eb2298dd028df0d9ba30"} Feb 17 09:06:02 crc kubenswrapper[4813]: I0217 09:06:02.199955 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" podStartSLOduration=2.199937103 podStartE2EDuration="2.199937103s" podCreationTimestamp="2026-02-17 09:06:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:06:02.191498282 +0000 UTC m=+1509.852259525" watchObservedRunningTime="2026-02-17 09:06:02.199937103 +0000 UTC m=+1509.860698316" Feb 17 09:06:04 crc kubenswrapper[4813]: I0217 09:06:04.197749 4813 generic.go:334] "Generic (PLEG): container finished" podID="499f7c94-130a-4d1b-bb86-a42f9a20b869" containerID="687a2b900e7051ee434fcbcf534a0602b0ffb38c9f4b802da5b38745a722ae50" exitCode=0 Feb 17 09:06:04 crc kubenswrapper[4813]: I0217 09:06:04.197833 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" 
event={"ID":"499f7c94-130a-4d1b-bb86-a42f9a20b869","Type":"ContainerDied","Data":"687a2b900e7051ee434fcbcf534a0602b0ffb38c9f4b802da5b38745a722ae50"} Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.165085 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.165144 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.165188 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.165774 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58"} pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.165866 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" containerID="cri-o://4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" gracePeriod=600 Feb 17 09:06:05 crc kubenswrapper[4813]: E0217 09:06:05.291543 4813 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.624237 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.668221 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-db-sync-config-data\") pod \"499f7c94-130a-4d1b-bb86-a42f9a20b869\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.668285 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d8lp\" (UniqueName: \"kubernetes.io/projected/499f7c94-130a-4d1b-bb86-a42f9a20b869-kube-api-access-8d8lp\") pod \"499f7c94-130a-4d1b-bb86-a42f9a20b869\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.668457 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-combined-ca-bundle\") pod \"499f7c94-130a-4d1b-bb86-a42f9a20b869\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.668565 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-config-data\") pod 
\"499f7c94-130a-4d1b-bb86-a42f9a20b869\" (UID: \"499f7c94-130a-4d1b-bb86-a42f9a20b869\") " Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.673971 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "499f7c94-130a-4d1b-bb86-a42f9a20b869" (UID: "499f7c94-130a-4d1b-bb86-a42f9a20b869"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.677946 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/499f7c94-130a-4d1b-bb86-a42f9a20b869-kube-api-access-8d8lp" (OuterVolumeSpecName: "kube-api-access-8d8lp") pod "499f7c94-130a-4d1b-bb86-a42f9a20b869" (UID: "499f7c94-130a-4d1b-bb86-a42f9a20b869"). InnerVolumeSpecName "kube-api-access-8d8lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.702439 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "499f7c94-130a-4d1b-bb86-a42f9a20b869" (UID: "499f7c94-130a-4d1b-bb86-a42f9a20b869"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.741386 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-config-data" (OuterVolumeSpecName: "config-data") pod "499f7c94-130a-4d1b-bb86-a42f9a20b869" (UID: "499f7c94-130a-4d1b-bb86-a42f9a20b869"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.770001 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.770040 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.770054 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/499f7c94-130a-4d1b-bb86-a42f9a20b869-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:05 crc kubenswrapper[4813]: I0217 09:06:05.770065 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d8lp\" (UniqueName: \"kubernetes.io/projected/499f7c94-130a-4d1b-bb86-a42f9a20b869-kube-api-access-8d8lp\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.222115 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" event={"ID":"499f7c94-130a-4d1b-bb86-a42f9a20b869","Type":"ContainerDied","Data":"df0b984664079abecbe3d02eeafc61fab3e27267bdf3eb2298dd028df0d9ba30"} Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.222668 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df0b984664079abecbe3d02eeafc61fab3e27267bdf3eb2298dd028df0d9ba30" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.222139 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.226538 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a6ba827-b08b-4163-b067-d9adb119398d" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" exitCode=0 Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.226598 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerDied","Data":"4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58"} Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.226675 4813 scope.go:117] "RemoveContainer" containerID="e8f1831aa866d7234a4f9752273e3fc18af2abeede207be480bb974f39d90c1d" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.227147 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:06:06 crc kubenswrapper[4813]: E0217 09:06:06.227544 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.548686 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:06:06 crc kubenswrapper[4813]: E0217 09:06:06.549552 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499f7c94-130a-4d1b-bb86-a42f9a20b869" containerName="watcher-kuttl-db-sync" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.549617 4813 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="499f7c94-130a-4d1b-bb86-a42f9a20b869" containerName="watcher-kuttl-db-sync" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.549844 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="499f7c94-130a-4d1b-bb86-a42f9a20b869" containerName="watcher-kuttl-db-sync" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.550733 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.554952 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-gv8rg" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.555222 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.555428 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.556283 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.557348 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.567097 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.573510 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.585073 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96g66\" (UniqueName: \"kubernetes.io/projected/1bccf2cb-4074-4c04-9659-9557d61f79d1-kube-api-access-96g66\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.585251 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.585334 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bccf2cb-4074-4c04-9659-9557d61f79d1-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.585456 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.585556 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.585622 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.653747 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.655461 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.659672 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.665675 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.687492 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.687567 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.687602 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92g2j\" (UniqueName: \"kubernetes.io/projected/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-kube-api-access-92g2j\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.687646 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: 
\"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.687678 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngzj\" (UniqueName: \"kubernetes.io/projected/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-kube-api-access-dngzj\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.687719 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.687746 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.687775 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.687842 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-logs\") pod 
\"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.687916 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.687953 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96g66\" (UniqueName: \"kubernetes.io/projected/1bccf2cb-4074-4c04-9659-9557d61f79d1-kube-api-access-96g66\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.687984 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.688008 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bccf2cb-4074-4c04-9659-9557d61f79d1-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.688034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-custom-prometheus-ca\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.688064 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.688088 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.688114 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.689156 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bccf2cb-4074-4c04-9659-9557d61f79d1-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.691435 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.696037 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.709937 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.709988 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.709937 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96g66\" (UniqueName: \"kubernetes.io/projected/1bccf2cb-4074-4c04-9659-9557d61f79d1-kube-api-access-96g66\") pod \"watcher-kuttl-api-0\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.789251 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-logs\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.789389 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.789415 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.789431 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.789459 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.789491 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.789520 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92g2j\" (UniqueName: \"kubernetes.io/projected/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-kube-api-access-92g2j\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.789557 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.789583 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dngzj\" (UniqueName: \"kubernetes.io/projected/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-kube-api-access-dngzj\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.789620 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.789650 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.790406 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.790734 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.794515 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.794534 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.795872 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.796529 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.798593 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.799806 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.801964 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.810043 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dngzj\" (UniqueName: 
\"kubernetes.io/projected/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-kube-api-access-dngzj\") pod \"watcher-kuttl-applier-0\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.810797 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92g2j\" (UniqueName: \"kubernetes.io/projected/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-kube-api-access-92g2j\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.867171 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.875179 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:06 crc kubenswrapper[4813]: I0217 09:06:06.997075 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:07 crc kubenswrapper[4813]: I0217 09:06:07.257441 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:06:07 crc kubenswrapper[4813]: W0217 09:06:07.258294 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdc93a4d_1e9a_46a0_9a11_c72e521930c6.slice/crio-d53fc02fa86a30c413ba6ac2c76cde25bce935c70aeaf74bc4182766a1885b6a WatchSource:0}: Error finding container d53fc02fa86a30c413ba6ac2c76cde25bce935c70aeaf74bc4182766a1885b6a: Status 404 returned error can't find the container with id d53fc02fa86a30c413ba6ac2c76cde25bce935c70aeaf74bc4182766a1885b6a Feb 17 09:06:07 crc kubenswrapper[4813]: I0217 09:06:07.311415 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:06:07 crc kubenswrapper[4813]: W0217 09:06:07.318428 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b1a8fbc_6696_4368_a2b4_ac8714a6ba54.slice/crio-ec185479d44ccc1a7ce331b8c2d97ba946a04d296ed71557a4dd52b516983750 WatchSource:0}: Error finding container ec185479d44ccc1a7ce331b8c2d97ba946a04d296ed71557a4dd52b516983750: Status 404 returned error can't find the container with id ec185479d44ccc1a7ce331b8c2d97ba946a04d296ed71557a4dd52b516983750 Feb 17 09:06:07 crc kubenswrapper[4813]: I0217 09:06:07.407886 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:06:07 crc kubenswrapper[4813]: W0217 09:06:07.416390 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bccf2cb_4074_4c04_9659_9557d61f79d1.slice/crio-919043836a1f151b8dae1666a1af54d1048f1790efa6ce19407ab4a57151a452 
WatchSource:0}: Error finding container 919043836a1f151b8dae1666a1af54d1048f1790efa6ce19407ab4a57151a452: Status 404 returned error can't find the container with id 919043836a1f151b8dae1666a1af54d1048f1790efa6ce19407ab4a57151a452 Feb 17 09:06:08 crc kubenswrapper[4813]: I0217 09:06:08.247477 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"fdc93a4d-1e9a-46a0-9a11-c72e521930c6","Type":"ContainerStarted","Data":"79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11"} Feb 17 09:06:08 crc kubenswrapper[4813]: I0217 09:06:08.249062 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"fdc93a4d-1e9a-46a0-9a11-c72e521930c6","Type":"ContainerStarted","Data":"d53fc02fa86a30c413ba6ac2c76cde25bce935c70aeaf74bc4182766a1885b6a"} Feb 17 09:06:08 crc kubenswrapper[4813]: I0217 09:06:08.252061 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1bccf2cb-4074-4c04-9659-9557d61f79d1","Type":"ContainerStarted","Data":"09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565"} Feb 17 09:06:08 crc kubenswrapper[4813]: I0217 09:06:08.252099 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1bccf2cb-4074-4c04-9659-9557d61f79d1","Type":"ContainerStarted","Data":"3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6"} Feb 17 09:06:08 crc kubenswrapper[4813]: I0217 09:06:08.252114 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1bccf2cb-4074-4c04-9659-9557d61f79d1","Type":"ContainerStarted","Data":"919043836a1f151b8dae1666a1af54d1048f1790efa6ce19407ab4a57151a452"} Feb 17 09:06:08 crc kubenswrapper[4813]: I0217 09:06:08.252596 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:08 crc kubenswrapper[4813]: I0217 09:06:08.253612 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54","Type":"ContainerStarted","Data":"6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9"} Feb 17 09:06:08 crc kubenswrapper[4813]: I0217 09:06:08.253641 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54","Type":"ContainerStarted","Data":"ec185479d44ccc1a7ce331b8c2d97ba946a04d296ed71557a4dd52b516983750"} Feb 17 09:06:08 crc kubenswrapper[4813]: I0217 09:06:08.278584 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.278566452 podStartE2EDuration="2.278566452s" podCreationTimestamp="2026-02-17 09:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:06:08.272642252 +0000 UTC m=+1515.933403475" watchObservedRunningTime="2026-02-17 09:06:08.278566452 +0000 UTC m=+1515.939327675" Feb 17 09:06:08 crc kubenswrapper[4813]: I0217 09:06:08.307636 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.307615123 podStartE2EDuration="2.307615123s" podCreationTimestamp="2026-02-17 09:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:06:08.299153111 +0000 UTC m=+1515.959914334" watchObservedRunningTime="2026-02-17 09:06:08.307615123 +0000 UTC m=+1515.968376356" Feb 17 09:06:08 crc kubenswrapper[4813]: I0217 09:06:08.318521 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.318504775 podStartE2EDuration="2.318504775s" podCreationTimestamp="2026-02-17 09:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:06:08.315941372 +0000 UTC m=+1515.976702595" watchObservedRunningTime="2026-02-17 09:06:08.318504775 +0000 UTC m=+1515.979265998" Feb 17 09:06:10 crc kubenswrapper[4813]: I0217 09:06:10.228427 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:11 crc kubenswrapper[4813]: I0217 09:06:11.867934 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:12 crc kubenswrapper[4813]: I0217 09:06:12.000442 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:16 crc kubenswrapper[4813]: I0217 09:06:16.867756 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:16 crc kubenswrapper[4813]: I0217 09:06:16.876025 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:16 crc kubenswrapper[4813]: I0217 09:06:16.878048 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:16 crc kubenswrapper[4813]: I0217 09:06:16.907629 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:16 crc kubenswrapper[4813]: I0217 09:06:16.998154 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:17 crc kubenswrapper[4813]: 
I0217 09:06:17.019068 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:17 crc kubenswrapper[4813]: I0217 09:06:17.337126 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:17 crc kubenswrapper[4813]: I0217 09:06:17.346866 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:17 crc kubenswrapper[4813]: I0217 09:06:17.362367 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:17 crc kubenswrapper[4813]: I0217 09:06:17.369795 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:18 crc kubenswrapper[4813]: I0217 09:06:18.111820 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:06:18 crc kubenswrapper[4813]: E0217 09:06:18.112055 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:06:18 crc kubenswrapper[4813]: I0217 09:06:18.707840 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:18 crc kubenswrapper[4813]: I0217 09:06:18.708347 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" 
containerName="ceilometer-central-agent" containerID="cri-o://4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d" gracePeriod=30 Feb 17 09:06:18 crc kubenswrapper[4813]: I0217 09:06:18.708441 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="sg-core" containerID="cri-o://8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb" gracePeriod=30 Feb 17 09:06:18 crc kubenswrapper[4813]: I0217 09:06:18.708592 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="ceilometer-notification-agent" containerID="cri-o://3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789" gracePeriod=30 Feb 17 09:06:18 crc kubenswrapper[4813]: I0217 09:06:18.708641 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="proxy-httpd" containerID="cri-o://aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f" gracePeriod=30 Feb 17 09:06:18 crc kubenswrapper[4813]: I0217 09:06:18.719358 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.188:3000/\": read tcp 10.217.0.2:36680->10.217.0.188:3000: read: connection reset by peer" Feb 17 09:06:18 crc kubenswrapper[4813]: E0217 09:06:18.952502 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7cf6603_0290_498b_9746_2e6bf08cd6a6.slice/crio-conmon-aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f.scope\": RecentStats: unable to find 
data in memory cache]" Feb 17 09:06:19 crc kubenswrapper[4813]: I0217 09:06:19.356657 4813 generic.go:334] "Generic (PLEG): container finished" podID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerID="aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f" exitCode=0 Feb 17 09:06:19 crc kubenswrapper[4813]: I0217 09:06:19.356701 4813 generic.go:334] "Generic (PLEG): container finished" podID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerID="8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb" exitCode=2 Feb 17 09:06:19 crc kubenswrapper[4813]: I0217 09:06:19.356712 4813 generic.go:334] "Generic (PLEG): container finished" podID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerID="4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d" exitCode=0 Feb 17 09:06:19 crc kubenswrapper[4813]: I0217 09:06:19.356731 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c7cf6603-0290-498b-9746-2e6bf08cd6a6","Type":"ContainerDied","Data":"aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f"} Feb 17 09:06:19 crc kubenswrapper[4813]: I0217 09:06:19.356776 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c7cf6603-0290-498b-9746-2e6bf08cd6a6","Type":"ContainerDied","Data":"8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb"} Feb 17 09:06:19 crc kubenswrapper[4813]: I0217 09:06:19.356787 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c7cf6603-0290-498b-9746-2e6bf08cd6a6","Type":"ContainerDied","Data":"4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d"} Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.029585 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.214387 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7cf6603-0290-498b-9746-2e6bf08cd6a6-log-httpd\") pod \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.214508 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-ceilometer-tls-certs\") pod \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.214548 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7cf6603-0290-498b-9746-2e6bf08cd6a6-run-httpd\") pod \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.214594 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-scripts\") pod \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.214620 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-config-data\") pod \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.214683 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlvfr\" (UniqueName: 
\"kubernetes.io/projected/c7cf6603-0290-498b-9746-2e6bf08cd6a6-kube-api-access-rlvfr\") pod \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.214774 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-combined-ca-bundle\") pod \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.214874 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-sg-core-conf-yaml\") pod \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\" (UID: \"c7cf6603-0290-498b-9746-2e6bf08cd6a6\") " Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.214949 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7cf6603-0290-498b-9746-2e6bf08cd6a6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c7cf6603-0290-498b-9746-2e6bf08cd6a6" (UID: "c7cf6603-0290-498b-9746-2e6bf08cd6a6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.215373 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7cf6603-0290-498b-9746-2e6bf08cd6a6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c7cf6603-0290-498b-9746-2e6bf08cd6a6" (UID: "c7cf6603-0290-498b-9746-2e6bf08cd6a6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.216586 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7cf6603-0290-498b-9746-2e6bf08cd6a6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.216659 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7cf6603-0290-498b-9746-2e6bf08cd6a6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.222218 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cf6603-0290-498b-9746-2e6bf08cd6a6-kube-api-access-rlvfr" (OuterVolumeSpecName: "kube-api-access-rlvfr") pod "c7cf6603-0290-498b-9746-2e6bf08cd6a6" (UID: "c7cf6603-0290-498b-9746-2e6bf08cd6a6"). InnerVolumeSpecName "kube-api-access-rlvfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.223031 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-scripts" (OuterVolumeSpecName: "scripts") pod "c7cf6603-0290-498b-9746-2e6bf08cd6a6" (UID: "c7cf6603-0290-498b-9746-2e6bf08cd6a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.265700 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c7cf6603-0290-498b-9746-2e6bf08cd6a6" (UID: "c7cf6603-0290-498b-9746-2e6bf08cd6a6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.293056 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c7cf6603-0290-498b-9746-2e6bf08cd6a6" (UID: "c7cf6603-0290-498b-9746-2e6bf08cd6a6"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.309613 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7cf6603-0290-498b-9746-2e6bf08cd6a6" (UID: "c7cf6603-0290-498b-9746-2e6bf08cd6a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.318771 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.318838 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.318857 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.318869 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlvfr\" (UniqueName: 
\"kubernetes.io/projected/c7cf6603-0290-498b-9746-2e6bf08cd6a6-kube-api-access-rlvfr\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.318881 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.327234 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-config-data" (OuterVolumeSpecName: "config-data") pod "c7cf6603-0290-498b-9746-2e6bf08cd6a6" (UID: "c7cf6603-0290-498b-9746-2e6bf08cd6a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.389324 4813 generic.go:334] "Generic (PLEG): container finished" podID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerID="3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789" exitCode=0 Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.389512 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c7cf6603-0290-498b-9746-2e6bf08cd6a6","Type":"ContainerDied","Data":"3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789"} Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.389719 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c7cf6603-0290-498b-9746-2e6bf08cd6a6","Type":"ContainerDied","Data":"9b89a029aec897dc2dd37fb33e709dd5d42e9659c815ef58e633641d02370f2b"} Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.389729 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.389748 4813 scope.go:117] "RemoveContainer" containerID="aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.420983 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cf6603-0290-498b-9746-2e6bf08cd6a6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.425046 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.427848 4813 scope.go:117] "RemoveContainer" containerID="8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.433349 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.453480 4813 scope.go:117] "RemoveContainer" containerID="3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.461737 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:22 crc kubenswrapper[4813]: E0217 09:06:22.462932 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="sg-core" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.462950 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="sg-core" Feb 17 09:06:22 crc kubenswrapper[4813]: E0217 09:06:22.462979 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="ceilometer-central-agent" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.462988 
4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="ceilometer-central-agent" Feb 17 09:06:22 crc kubenswrapper[4813]: E0217 09:06:22.463008 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="proxy-httpd" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.463015 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="proxy-httpd" Feb 17 09:06:22 crc kubenswrapper[4813]: E0217 09:06:22.463030 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="ceilometer-notification-agent" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.463038 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="ceilometer-notification-agent" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.463206 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="ceilometer-central-agent" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.463225 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="ceilometer-notification-agent" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.463238 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="proxy-httpd" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.463256 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" containerName="sg-core" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.465016 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.467533 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.467877 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.468110 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.479682 4813 scope.go:117] "RemoveContainer" containerID="4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.489432 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.525529 4813 scope.go:117] "RemoveContainer" containerID="aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f" Feb 17 09:06:22 crc kubenswrapper[4813]: E0217 09:06:22.525993 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f\": container with ID starting with aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f not found: ID does not exist" containerID="aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.526026 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f"} err="failed to get container status \"aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f\": rpc error: code = NotFound desc = could not find container 
\"aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f\": container with ID starting with aa33c7d9f800d07084d5ab1b6e22065ad841935c6034af4bfb4805d0c750742f not found: ID does not exist" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.526063 4813 scope.go:117] "RemoveContainer" containerID="8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb" Feb 17 09:06:22 crc kubenswrapper[4813]: E0217 09:06:22.526488 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb\": container with ID starting with 8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb not found: ID does not exist" containerID="8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.526520 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb"} err="failed to get container status \"8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb\": rpc error: code = NotFound desc = could not find container \"8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb\": container with ID starting with 8be1f7bc274cb939b995982b62147ee1080339ed74f408b49a34a0a913ec95eb not found: ID does not exist" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.526532 4813 scope.go:117] "RemoveContainer" containerID="3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789" Feb 17 09:06:22 crc kubenswrapper[4813]: E0217 09:06:22.526815 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789\": container with ID starting with 3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789 not found: ID does not exist" 
containerID="3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.526856 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789"} err="failed to get container status \"3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789\": rpc error: code = NotFound desc = could not find container \"3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789\": container with ID starting with 3607ea5e04b2fcda75dc5c54abfc52448d41ebf7ee5f472f179d3237c0192789 not found: ID does not exist" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.526882 4813 scope.go:117] "RemoveContainer" containerID="4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d" Feb 17 09:06:22 crc kubenswrapper[4813]: E0217 09:06:22.527171 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d\": container with ID starting with 4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d not found: ID does not exist" containerID="4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.527192 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d"} err="failed to get container status \"4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d\": rpc error: code = NotFound desc = could not find container \"4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d\": container with ID starting with 4c3f8b1196a80dacc12473167798e50297715ecb8e649f5c60f1090a1fd9e79d not found: ID does not exist" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.627614 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-scripts\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.627659 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjwqd\" (UniqueName: \"kubernetes.io/projected/9f340f06-c740-4d76-9704-981cf40dccc5-kube-api-access-mjwqd\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.627715 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.627863 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f340f06-c740-4d76-9704-981cf40dccc5-log-httpd\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.628035 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-config-data\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.628091 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.628189 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.628266 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f340f06-c740-4d76-9704-981cf40dccc5-run-httpd\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.730398 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.730496 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.730578 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9f340f06-c740-4d76-9704-981cf40dccc5-run-httpd\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.730648 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-scripts\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.730683 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjwqd\" (UniqueName: \"kubernetes.io/projected/9f340f06-c740-4d76-9704-981cf40dccc5-kube-api-access-mjwqd\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.730745 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.730813 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f340f06-c740-4d76-9704-981cf40dccc5-log-httpd\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.730887 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-config-data\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.731748 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f340f06-c740-4d76-9704-981cf40dccc5-run-httpd\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.733970 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f340f06-c740-4d76-9704-981cf40dccc5-log-httpd\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.735964 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.736264 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-config-data\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.737283 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.740565 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.743071 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-scripts\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.748425 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjwqd\" (UniqueName: \"kubernetes.io/projected/9f340f06-c740-4d76-9704-981cf40dccc5-kube-api-access-mjwqd\") pod \"ceilometer-0\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:22 crc kubenswrapper[4813]: I0217 09:06:22.804450 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.084044 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s"] Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.093720 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-2dw6s"] Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.128377 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="499f7c94-130a-4d1b-bb86-a42f9a20b869" path="/var/lib/kubelet/pods/499f7c94-130a-4d1b-bb86-a42f9a20b869/volumes" Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.129119 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7cf6603-0290-498b-9746-2e6bf08cd6a6" path="/var/lib/kubelet/pods/c7cf6603-0290-498b-9746-2e6bf08cd6a6/volumes" Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.141413 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher8e65-account-delete-m7lf2"] Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.142623 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.153176 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher8e65-account-delete-m7lf2"] Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.167489 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.167697 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="fdc93a4d-1e9a-46a0-9a11-c72e521930c6" containerName="watcher-applier" containerID="cri-o://79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11" gracePeriod=30 Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.215799 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.216039 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1bccf2cb-4074-4c04-9659-9557d61f79d1" containerName="watcher-kuttl-api-log" containerID="cri-o://3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6" gracePeriod=30 Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.216159 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1bccf2cb-4074-4c04-9659-9557d61f79d1" containerName="watcher-api" containerID="cri-o://09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565" gracePeriod=30 Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.240773 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.240970 4813 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="1b1a8fbc-6696-4368-a2b4-ac8714a6ba54" containerName="watcher-decision-engine" containerID="cri-o://6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9" gracePeriod=30 Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.342239 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gbmz\" (UniqueName: \"kubernetes.io/projected/033c6943-140c-4dcc-88fb-5de66589dc54-kube-api-access-9gbmz\") pod \"watcher8e65-account-delete-m7lf2\" (UID: \"033c6943-140c-4dcc-88fb-5de66589dc54\") " pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.342364 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/033c6943-140c-4dcc-88fb-5de66589dc54-operator-scripts\") pod \"watcher8e65-account-delete-m7lf2\" (UID: \"033c6943-140c-4dcc-88fb-5de66589dc54\") " pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.392936 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.409762 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.443803 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gbmz\" (UniqueName: \"kubernetes.io/projected/033c6943-140c-4dcc-88fb-5de66589dc54-kube-api-access-9gbmz\") pod \"watcher8e65-account-delete-m7lf2\" (UID: \"033c6943-140c-4dcc-88fb-5de66589dc54\") " pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.443911 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/033c6943-140c-4dcc-88fb-5de66589dc54-operator-scripts\") pod \"watcher8e65-account-delete-m7lf2\" (UID: \"033c6943-140c-4dcc-88fb-5de66589dc54\") " pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.444877 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/033c6943-140c-4dcc-88fb-5de66589dc54-operator-scripts\") pod \"watcher8e65-account-delete-m7lf2\" (UID: \"033c6943-140c-4dcc-88fb-5de66589dc54\") " pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.474340 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gbmz\" (UniqueName: \"kubernetes.io/projected/033c6943-140c-4dcc-88fb-5de66589dc54-kube-api-access-9gbmz\") pod \"watcher8e65-account-delete-m7lf2\" (UID: \"033c6943-140c-4dcc-88fb-5de66589dc54\") " pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" Feb 17 09:06:23 crc kubenswrapper[4813]: I0217 09:06:23.774579 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" Feb 17 09:06:24 crc kubenswrapper[4813]: I0217 09:06:24.285150 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher8e65-account-delete-m7lf2"] Feb 17 09:06:24 crc kubenswrapper[4813]: I0217 09:06:24.408164 4813 generic.go:334] "Generic (PLEG): container finished" podID="1bccf2cb-4074-4c04-9659-9557d61f79d1" containerID="3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6" exitCode=143 Feb 17 09:06:24 crc kubenswrapper[4813]: I0217 09:06:24.408276 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1bccf2cb-4074-4c04-9659-9557d61f79d1","Type":"ContainerDied","Data":"3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6"} Feb 17 09:06:24 crc kubenswrapper[4813]: I0217 09:06:24.409684 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" event={"ID":"033c6943-140c-4dcc-88fb-5de66589dc54","Type":"ContainerStarted","Data":"d97ba2313a885b726b4af11be649f16d1e3421d3d845a3583c8adbae1c550e84"} Feb 17 09:06:24 crc kubenswrapper[4813]: I0217 09:06:24.411047 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9f340f06-c740-4d76-9704-981cf40dccc5","Type":"ContainerStarted","Data":"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518"} Feb 17 09:06:24 crc kubenswrapper[4813]: I0217 09:06:24.411080 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9f340f06-c740-4d76-9704-981cf40dccc5","Type":"ContainerStarted","Data":"2795a4ac006881a66cc27b1a3df1d597505cba0821cac3299f739346798fc81e"} Feb 17 09:06:24 crc kubenswrapper[4813]: I0217 09:06:24.915235 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.074138 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-config-data\") pod \"1bccf2cb-4074-4c04-9659-9557d61f79d1\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.074333 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-custom-prometheus-ca\") pod \"1bccf2cb-4074-4c04-9659-9557d61f79d1\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.074403 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96g66\" (UniqueName: \"kubernetes.io/projected/1bccf2cb-4074-4c04-9659-9557d61f79d1-kube-api-access-96g66\") pod \"1bccf2cb-4074-4c04-9659-9557d61f79d1\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.074489 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bccf2cb-4074-4c04-9659-9557d61f79d1-logs\") pod \"1bccf2cb-4074-4c04-9659-9557d61f79d1\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.074544 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-combined-ca-bundle\") pod \"1bccf2cb-4074-4c04-9659-9557d61f79d1\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.074584 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-cert-memcached-mtls\") pod \"1bccf2cb-4074-4c04-9659-9557d61f79d1\" (UID: \"1bccf2cb-4074-4c04-9659-9557d61f79d1\") " Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.074966 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bccf2cb-4074-4c04-9659-9557d61f79d1-logs" (OuterVolumeSpecName: "logs") pod "1bccf2cb-4074-4c04-9659-9557d61f79d1" (UID: "1bccf2cb-4074-4c04-9659-9557d61f79d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.075273 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bccf2cb-4074-4c04-9659-9557d61f79d1-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.081415 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bccf2cb-4074-4c04-9659-9557d61f79d1-kube-api-access-96g66" (OuterVolumeSpecName: "kube-api-access-96g66") pod "1bccf2cb-4074-4c04-9659-9557d61f79d1" (UID: "1bccf2cb-4074-4c04-9659-9557d61f79d1"). InnerVolumeSpecName "kube-api-access-96g66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.111735 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bccf2cb-4074-4c04-9659-9557d61f79d1" (UID: "1bccf2cb-4074-4c04-9659-9557d61f79d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.124384 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1bccf2cb-4074-4c04-9659-9557d61f79d1" (UID: "1bccf2cb-4074-4c04-9659-9557d61f79d1"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.172528 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-config-data" (OuterVolumeSpecName: "config-data") pod "1bccf2cb-4074-4c04-9659-9557d61f79d1" (UID: "1bccf2cb-4074-4c04-9659-9557d61f79d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.177539 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.177590 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.177604 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.177617 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96g66\" (UniqueName: \"kubernetes.io/projected/1bccf2cb-4074-4c04-9659-9557d61f79d1-kube-api-access-96g66\") on node 
\"crc\" DevicePath \"\"" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.196591 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "1bccf2cb-4074-4c04-9659-9557d61f79d1" (UID: "1bccf2cb-4074-4c04-9659-9557d61f79d1"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.281568 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1bccf2cb-4074-4c04-9659-9557d61f79d1-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.365392 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.422049 4813 generic.go:334] "Generic (PLEG): container finished" podID="1bccf2cb-4074-4c04-9659-9557d61f79d1" containerID="09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565" exitCode=0 Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.422123 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.422152 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1bccf2cb-4074-4c04-9659-9557d61f79d1","Type":"ContainerDied","Data":"09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565"} Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.422183 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1bccf2cb-4074-4c04-9659-9557d61f79d1","Type":"ContainerDied","Data":"919043836a1f151b8dae1666a1af54d1048f1790efa6ce19407ab4a57151a452"} Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.422203 4813 scope.go:117] "RemoveContainer" containerID="09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.424395 4813 generic.go:334] "Generic (PLEG): container finished" podID="1b1a8fbc-6696-4368-a2b4-ac8714a6ba54" containerID="6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9" exitCode=0 Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.424444 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54","Type":"ContainerDied","Data":"6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9"} Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.424464 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54","Type":"ContainerDied","Data":"ec185479d44ccc1a7ce331b8c2d97ba946a04d296ed71557a4dd52b516983750"} Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.424524 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.427227 4813 generic.go:334] "Generic (PLEG): container finished" podID="033c6943-140c-4dcc-88fb-5de66589dc54" containerID="749ff068a2a768a739001284bf93466a5b9a4051fd5a6b597824b36299d815e3" exitCode=0 Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.427322 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" event={"ID":"033c6943-140c-4dcc-88fb-5de66589dc54","Type":"ContainerDied","Data":"749ff068a2a768a739001284bf93466a5b9a4051fd5a6b597824b36299d815e3"} Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.429752 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9f340f06-c740-4d76-9704-981cf40dccc5","Type":"ContainerStarted","Data":"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f"} Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.429805 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9f340f06-c740-4d76-9704-981cf40dccc5","Type":"ContainerStarted","Data":"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7"} Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.474204 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.474240 4813 scope.go:117] "RemoveContainer" containerID="3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.483571 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.484118 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-combined-ca-bundle\") pod \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.484179 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-custom-prometheus-ca\") pod \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.484236 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-config-data\") pod \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.484368 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-logs\") pod \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.484400 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-cert-memcached-mtls\") pod \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.484452 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92g2j\" (UniqueName: \"kubernetes.io/projected/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-kube-api-access-92g2j\") pod \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\" (UID: \"1b1a8fbc-6696-4368-a2b4-ac8714a6ba54\") " Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 
09:06:25.488452 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-logs" (OuterVolumeSpecName: "logs") pod "1b1a8fbc-6696-4368-a2b4-ac8714a6ba54" (UID: "1b1a8fbc-6696-4368-a2b4-ac8714a6ba54"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.490705 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-kube-api-access-92g2j" (OuterVolumeSpecName: "kube-api-access-92g2j") pod "1b1a8fbc-6696-4368-a2b4-ac8714a6ba54" (UID: "1b1a8fbc-6696-4368-a2b4-ac8714a6ba54"). InnerVolumeSpecName "kube-api-access-92g2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.498925 4813 scope.go:117] "RemoveContainer" containerID="09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565" Feb 17 09:06:25 crc kubenswrapper[4813]: E0217 09:06:25.499964 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565\": container with ID starting with 09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565 not found: ID does not exist" containerID="09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.499992 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565"} err="failed to get container status \"09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565\": rpc error: code = NotFound desc = could not find container \"09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565\": container with ID starting with 
09aaf5b0ca3c41f171753a6d37969366ccfd1424b9ce9c9ed60e38fc59adb565 not found: ID does not exist" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.500011 4813 scope.go:117] "RemoveContainer" containerID="3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6" Feb 17 09:06:25 crc kubenswrapper[4813]: E0217 09:06:25.500736 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6\": container with ID starting with 3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6 not found: ID does not exist" containerID="3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.500791 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6"} err="failed to get container status \"3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6\": rpc error: code = NotFound desc = could not find container \"3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6\": container with ID starting with 3ab28b9eacc73e678d4edb6fe8589d03e5875a666afb2bad3d322b6e6f3376c6 not found: ID does not exist" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.500835 4813 scope.go:117] "RemoveContainer" containerID="6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.515601 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1b1a8fbc-6696-4368-a2b4-ac8714a6ba54" (UID: "1b1a8fbc-6696-4368-a2b4-ac8714a6ba54"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.529730 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b1a8fbc-6696-4368-a2b4-ac8714a6ba54" (UID: "1b1a8fbc-6696-4368-a2b4-ac8714a6ba54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.555348 4813 scope.go:117] "RemoveContainer" containerID="6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9" Feb 17 09:06:25 crc kubenswrapper[4813]: E0217 09:06:25.555772 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9\": container with ID starting with 6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9 not found: ID does not exist" containerID="6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.555801 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9"} err="failed to get container status \"6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9\": rpc error: code = NotFound desc = could not find container \"6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9\": container with ID starting with 6bd27d863be9b32f9d36bbcd87e4910ff6275b01dcc7822554444a71abb105e9 not found: ID does not exist" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.560809 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-cert-memcached-mtls" (OuterVolumeSpecName: 
"cert-memcached-mtls") pod "1b1a8fbc-6696-4368-a2b4-ac8714a6ba54" (UID: "1b1a8fbc-6696-4368-a2b4-ac8714a6ba54"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.575325 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-config-data" (OuterVolumeSpecName: "config-data") pod "1b1a8fbc-6696-4368-a2b4-ac8714a6ba54" (UID: "1b1a8fbc-6696-4368-a2b4-ac8714a6ba54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.589420 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.589453 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.589463 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.589472 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.589480 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:25 crc 
kubenswrapper[4813]: I0217 09:06:25.589488 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92g2j\" (UniqueName: \"kubernetes.io/projected/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54-kube-api-access-92g2j\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.758282 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:06:25 crc kubenswrapper[4813]: I0217 09:06:25.764906 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.407912 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.447544 4813 generic.go:334] "Generic (PLEG): container finished" podID="fdc93a4d-1e9a-46a0-9a11-c72e521930c6" containerID="79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11" exitCode=0 Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.447855 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"fdc93a4d-1e9a-46a0-9a11-c72e521930c6","Type":"ContainerDied","Data":"79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11"} Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.447879 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"fdc93a4d-1e9a-46a0-9a11-c72e521930c6","Type":"ContainerDied","Data":"d53fc02fa86a30c413ba6ac2c76cde25bce935c70aeaf74bc4182766a1885b6a"} Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.447895 4813 scope.go:117] "RemoveContainer" containerID="79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.448161 4813 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.478494 4813 scope.go:117] "RemoveContainer" containerID="79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11" Feb 17 09:06:26 crc kubenswrapper[4813]: E0217 09:06:26.488004 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11\": container with ID starting with 79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11 not found: ID does not exist" containerID="79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.488049 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11"} err="failed to get container status \"79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11\": rpc error: code = NotFound desc = could not find container \"79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11\": container with ID starting with 79da9b2f36d69848565b5febc4540e3804b4a1e93595815639c6a95b509a6b11 not found: ID does not exist" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.501737 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-config-data\") pod \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.501797 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-logs\") pod \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\" (UID: 
\"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.501826 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-cert-memcached-mtls\") pod \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.501986 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dngzj\" (UniqueName: \"kubernetes.io/projected/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-kube-api-access-dngzj\") pod \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.502028 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-combined-ca-bundle\") pod \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\" (UID: \"fdc93a4d-1e9a-46a0-9a11-c72e521930c6\") " Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.502621 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-logs" (OuterVolumeSpecName: "logs") pod "fdc93a4d-1e9a-46a0-9a11-c72e521930c6" (UID: "fdc93a4d-1e9a-46a0-9a11-c72e521930c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.538489 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-kube-api-access-dngzj" (OuterVolumeSpecName: "kube-api-access-dngzj") pod "fdc93a4d-1e9a-46a0-9a11-c72e521930c6" (UID: "fdc93a4d-1e9a-46a0-9a11-c72e521930c6"). InnerVolumeSpecName "kube-api-access-dngzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.580599 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdc93a4d-1e9a-46a0-9a11-c72e521930c6" (UID: "fdc93a4d-1e9a-46a0-9a11-c72e521930c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.581489 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-config-data" (OuterVolumeSpecName: "config-data") pod "fdc93a4d-1e9a-46a0-9a11-c72e521930c6" (UID: "fdc93a4d-1e9a-46a0-9a11-c72e521930c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.604238 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.604270 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.604283 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.604292 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dngzj\" (UniqueName: \"kubernetes.io/projected/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-kube-api-access-dngzj\") on node \"crc\" DevicePath \"\"" Feb 17 
09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.750544 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "fdc93a4d-1e9a-46a0-9a11-c72e521930c6" (UID: "fdc93a4d-1e9a-46a0-9a11-c72e521930c6"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.806909 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fdc93a4d-1e9a-46a0-9a11-c72e521930c6-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.833965 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:26 crc kubenswrapper[4813]: I0217 09:06:26.946571 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.093374 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.100884 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.110799 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/033c6943-140c-4dcc-88fb-5de66589dc54-operator-scripts\") pod \"033c6943-140c-4dcc-88fb-5de66589dc54\" (UID: \"033c6943-140c-4dcc-88fb-5de66589dc54\") " Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.110839 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gbmz\" (UniqueName: 
\"kubernetes.io/projected/033c6943-140c-4dcc-88fb-5de66589dc54-kube-api-access-9gbmz\") pod \"033c6943-140c-4dcc-88fb-5de66589dc54\" (UID: \"033c6943-140c-4dcc-88fb-5de66589dc54\") " Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.111876 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/033c6943-140c-4dcc-88fb-5de66589dc54-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "033c6943-140c-4dcc-88fb-5de66589dc54" (UID: "033c6943-140c-4dcc-88fb-5de66589dc54"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.115053 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/033c6943-140c-4dcc-88fb-5de66589dc54-kube-api-access-9gbmz" (OuterVolumeSpecName: "kube-api-access-9gbmz") pod "033c6943-140c-4dcc-88fb-5de66589dc54" (UID: "033c6943-140c-4dcc-88fb-5de66589dc54"). InnerVolumeSpecName "kube-api-access-9gbmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.120916 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b1a8fbc-6696-4368-a2b4-ac8714a6ba54" path="/var/lib/kubelet/pods/1b1a8fbc-6696-4368-a2b4-ac8714a6ba54/volumes" Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.123800 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bccf2cb-4074-4c04-9659-9557d61f79d1" path="/var/lib/kubelet/pods/1bccf2cb-4074-4c04-9659-9557d61f79d1/volumes" Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.124626 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdc93a4d-1e9a-46a0-9a11-c72e521930c6" path="/var/lib/kubelet/pods/fdc93a4d-1e9a-46a0-9a11-c72e521930c6/volumes" Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.212621 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/033c6943-140c-4dcc-88fb-5de66589dc54-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.212647 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gbmz\" (UniqueName: \"kubernetes.io/projected/033c6943-140c-4dcc-88fb-5de66589dc54-kube-api-access-9gbmz\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.460911 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" event={"ID":"033c6943-140c-4dcc-88fb-5de66589dc54","Type":"ContainerDied","Data":"d97ba2313a885b726b4af11be649f16d1e3421d3d845a3583c8adbae1c550e84"} Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.460935 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher8e65-account-delete-m7lf2" Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.460947 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d97ba2313a885b726b4af11be649f16d1e3421d3d845a3583c8adbae1c550e84" Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.465485 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9f340f06-c740-4d76-9704-981cf40dccc5","Type":"ContainerStarted","Data":"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3"} Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.465738 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:27 crc kubenswrapper[4813]: I0217 09:06:27.486145 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.547480007 podStartE2EDuration="5.48613103s" podCreationTimestamp="2026-02-17 09:06:22 +0000 UTC" firstStartedPulling="2026-02-17 09:06:23.409554196 +0000 UTC m=+1531.070315419" lastFinishedPulling="2026-02-17 09:06:26.348205219 +0000 UTC m=+1534.008966442" observedRunningTime="2026-02-17 09:06:27.485358178 +0000 UTC m=+1535.146119401" watchObservedRunningTime="2026-02-17 09:06:27.48613103 +0000 UTC m=+1535.146892253" Feb 17 09:06:28 crc kubenswrapper[4813]: I0217 09:06:28.164218 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-zcm7m"] Feb 17 09:06:28 crc kubenswrapper[4813]: I0217 09:06:28.178471 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-zcm7m"] Feb 17 09:06:28 crc kubenswrapper[4813]: I0217 09:06:28.188922 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher8e65-account-delete-m7lf2"] Feb 17 09:06:28 crc kubenswrapper[4813]: I0217 09:06:28.196347 4813 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher8e65-account-delete-m7lf2"] Feb 17 09:06:28 crc kubenswrapper[4813]: I0217 09:06:28.203509 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-8e65-account-create-update-8btw6"] Feb 17 09:06:28 crc kubenswrapper[4813]: I0217 09:06:28.210440 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-8e65-account-create-update-8btw6"] Feb 17 09:06:28 crc kubenswrapper[4813]: I0217 09:06:28.475950 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="ceilometer-central-agent" containerID="cri-o://f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518" gracePeriod=30 Feb 17 09:06:28 crc kubenswrapper[4813]: I0217 09:06:28.475996 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="sg-core" containerID="cri-o://b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f" gracePeriod=30 Feb 17 09:06:28 crc kubenswrapper[4813]: I0217 09:06:28.476003 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="ceilometer-notification-agent" containerID="cri-o://ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7" gracePeriod=30 Feb 17 09:06:28 crc kubenswrapper[4813]: I0217 09:06:28.476051 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="proxy-httpd" containerID="cri-o://a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3" gracePeriod=30 Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.129757 
4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="033c6943-140c-4dcc-88fb-5de66589dc54" path="/var/lib/kubelet/pods/033c6943-140c-4dcc-88fb-5de66589dc54/volumes" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.130252 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3637c9fb-4fbe-4b09-89ca-e6b78e90fa03" path="/var/lib/kubelet/pods/3637c9fb-4fbe-4b09-89ca-e6b78e90fa03/volumes" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.130745 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1c85a8-48d9-4697-810a-07f11e4fd052" path="/var/lib/kubelet/pods/df1c85a8-48d9-4697-810a-07f11e4fd052/volumes" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.334771 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444371 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-wltvx"] Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.444671 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b1a8fbc-6696-4368-a2b4-ac8714a6ba54" containerName="watcher-decision-engine" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444685 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b1a8fbc-6696-4368-a2b4-ac8714a6ba54" containerName="watcher-decision-engine" Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.444697 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc93a4d-1e9a-46a0-9a11-c72e521930c6" containerName="watcher-applier" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444703 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc93a4d-1e9a-46a0-9a11-c72e521930c6" containerName="watcher-applier" Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.444714 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1bccf2cb-4074-4c04-9659-9557d61f79d1" containerName="watcher-api" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444720 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bccf2cb-4074-4c04-9659-9557d61f79d1" containerName="watcher-api" Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.444728 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="sg-core" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444733 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="sg-core" Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.444745 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033c6943-140c-4dcc-88fb-5de66589dc54" containerName="mariadb-account-delete" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444751 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="033c6943-140c-4dcc-88fb-5de66589dc54" containerName="mariadb-account-delete" Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.444762 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bccf2cb-4074-4c04-9659-9557d61f79d1" containerName="watcher-kuttl-api-log" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444768 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bccf2cb-4074-4c04-9659-9557d61f79d1" containerName="watcher-kuttl-api-log" Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.444777 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="ceilometer-notification-agent" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444783 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="ceilometer-notification-agent" Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.444794 4813 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="proxy-httpd" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444800 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="proxy-httpd" Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.444815 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="ceilometer-central-agent" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444820 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="ceilometer-central-agent" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444944 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b1a8fbc-6696-4368-a2b4-ac8714a6ba54" containerName="watcher-decision-engine" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444957 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc93a4d-1e9a-46a0-9a11-c72e521930c6" containerName="watcher-applier" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444966 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bccf2cb-4074-4c04-9659-9557d61f79d1" containerName="watcher-kuttl-api-log" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444975 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bccf2cb-4074-4c04-9659-9557d61f79d1" containerName="watcher-api" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444983 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="sg-core" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.444992 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="033c6943-140c-4dcc-88fb-5de66589dc54" containerName="mariadb-account-delete" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.445002 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="ceilometer-central-agent" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.445012 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="ceilometer-notification-agent" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.445020 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" containerName="proxy-httpd" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.445504 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wltvx" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.451939 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f340f06-c740-4d76-9704-981cf40dccc5-log-httpd\") pod \"9f340f06-c740-4d76-9704-981cf40dccc5\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.451991 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-combined-ca-bundle\") pod \"9f340f06-c740-4d76-9704-981cf40dccc5\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.452023 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-config-data\") pod \"9f340f06-c740-4d76-9704-981cf40dccc5\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.452047 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwqd\" (UniqueName: 
\"kubernetes.io/projected/9f340f06-c740-4d76-9704-981cf40dccc5-kube-api-access-mjwqd\") pod \"9f340f06-c740-4d76-9704-981cf40dccc5\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.452077 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-scripts\") pod \"9f340f06-c740-4d76-9704-981cf40dccc5\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.452104 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-sg-core-conf-yaml\") pod \"9f340f06-c740-4d76-9704-981cf40dccc5\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.452123 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f340f06-c740-4d76-9704-981cf40dccc5-run-httpd\") pod \"9f340f06-c740-4d76-9704-981cf40dccc5\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.452245 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-ceilometer-tls-certs\") pod \"9f340f06-c740-4d76-9704-981cf40dccc5\" (UID: \"9f340f06-c740-4d76-9704-981cf40dccc5\") " Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.452497 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f340f06-c740-4d76-9704-981cf40dccc5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9f340f06-c740-4d76-9704-981cf40dccc5" (UID: "9f340f06-c740-4d76-9704-981cf40dccc5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.452704 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f340f06-c740-4d76-9704-981cf40dccc5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.456931 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f340f06-c740-4d76-9704-981cf40dccc5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9f340f06-c740-4d76-9704-981cf40dccc5" (UID: "9f340f06-c740-4d76-9704-981cf40dccc5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.457001 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-scripts" (OuterVolumeSpecName: "scripts") pod "9f340f06-c740-4d76-9704-981cf40dccc5" (UID: "9f340f06-c740-4d76-9704-981cf40dccc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.460176 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f340f06-c740-4d76-9704-981cf40dccc5-kube-api-access-mjwqd" (OuterVolumeSpecName: "kube-api-access-mjwqd") pod "9f340f06-c740-4d76-9704-981cf40dccc5" (UID: "9f340f06-c740-4d76-9704-981cf40dccc5"). InnerVolumeSpecName "kube-api-access-mjwqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.460549 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wltvx"] Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.488675 4813 generic.go:334] "Generic (PLEG): container finished" podID="9f340f06-c740-4d76-9704-981cf40dccc5" containerID="a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3" exitCode=0 Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.489794 4813 generic.go:334] "Generic (PLEG): container finished" podID="9f340f06-c740-4d76-9704-981cf40dccc5" containerID="b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f" exitCode=2 Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.489904 4813 generic.go:334] "Generic (PLEG): container finished" podID="9f340f06-c740-4d76-9704-981cf40dccc5" containerID="ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7" exitCode=0 Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.490005 4813 generic.go:334] "Generic (PLEG): container finished" podID="9f340f06-c740-4d76-9704-981cf40dccc5" containerID="f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518" exitCode=0 Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.488733 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.488747 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9f340f06-c740-4d76-9704-981cf40dccc5","Type":"ContainerDied","Data":"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3"} Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.490947 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9f340f06-c740-4d76-9704-981cf40dccc5","Type":"ContainerDied","Data":"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f"} Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.491039 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9f340f06-c740-4d76-9704-981cf40dccc5","Type":"ContainerDied","Data":"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7"} Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.491107 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9f340f06-c740-4d76-9704-981cf40dccc5","Type":"ContainerDied","Data":"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518"} Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.491179 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9f340f06-c740-4d76-9704-981cf40dccc5","Type":"ContainerDied","Data":"2795a4ac006881a66cc27b1a3df1d597505cba0821cac3299f739346798fc81e"} Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.491280 4813 scope.go:117] "RemoveContainer" containerID="a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.496970 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9f340f06-c740-4d76-9704-981cf40dccc5" (UID: "9f340f06-c740-4d76-9704-981cf40dccc5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.506822 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9f340f06-c740-4d76-9704-981cf40dccc5" (UID: "9f340f06-c740-4d76-9704-981cf40dccc5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.520717 4813 scope.go:117] "RemoveContainer" containerID="b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.554929 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwtv2\" (UniqueName: \"kubernetes.io/projected/e2010c77-96c8-40e2-8f5d-0501c6b6ae92-kube-api-access-qwtv2\") pod \"watcher-db-create-wltvx\" (UID: \"e2010c77-96c8-40e2-8f5d-0501c6b6ae92\") " pod="watcher-kuttl-default/watcher-db-create-wltvx" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.556205 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2010c77-96c8-40e2-8f5d-0501c6b6ae92-operator-scripts\") pod \"watcher-db-create-wltvx\" (UID: \"e2010c77-96c8-40e2-8f5d-0501c6b6ae92\") " pod="watcher-kuttl-default/watcher-db-create-wltvx" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.557900 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.557921 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjwqd\" (UniqueName: \"kubernetes.io/projected/9f340f06-c740-4d76-9704-981cf40dccc5-kube-api-access-mjwqd\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.557933 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.557942 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.557952 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f340f06-c740-4d76-9704-981cf40dccc5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.565854 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj"] Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.567132 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.568882 4813 scope.go:117] "RemoveContainer" containerID="ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.569047 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-config-data" (OuterVolumeSpecName: "config-data") pod "9f340f06-c740-4d76-9704-981cf40dccc5" (UID: "9f340f06-c740-4d76-9704-981cf40dccc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.569203 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.573925 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj"] Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.602926 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f340f06-c740-4d76-9704-981cf40dccc5" (UID: "9f340f06-c740-4d76-9704-981cf40dccc5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.602978 4813 scope.go:117] "RemoveContainer" containerID="f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.629637 4813 scope.go:117] "RemoveContainer" containerID="a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3" Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.630106 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3\": container with ID starting with a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3 not found: ID does not exist" containerID="a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.630137 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3"} err="failed to get container status \"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3\": rpc error: code = NotFound desc = could not find container \"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3\": container with ID starting with a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3 not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.630156 4813 scope.go:117] "RemoveContainer" containerID="b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f" Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.630462 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f\": container with ID starting with 
b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f not found: ID does not exist" containerID="b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.630513 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f"} err="failed to get container status \"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f\": rpc error: code = NotFound desc = could not find container \"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f\": container with ID starting with b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.630544 4813 scope.go:117] "RemoveContainer" containerID="ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7" Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.630889 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7\": container with ID starting with ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7 not found: ID does not exist" containerID="ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.630935 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7"} err="failed to get container status \"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7\": rpc error: code = NotFound desc = could not find container \"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7\": container with ID starting with ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7 not found: ID does not 
exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.630964 4813 scope.go:117] "RemoveContainer" containerID="f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518" Feb 17 09:06:29 crc kubenswrapper[4813]: E0217 09:06:29.631196 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518\": container with ID starting with f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518 not found: ID does not exist" containerID="f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.631224 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518"} err="failed to get container status \"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518\": rpc error: code = NotFound desc = could not find container \"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518\": container with ID starting with f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518 not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.631238 4813 scope.go:117] "RemoveContainer" containerID="a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.631417 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3"} err="failed to get container status \"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3\": rpc error: code = NotFound desc = could not find container \"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3\": container with ID starting with a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3 not found: ID 
does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.631438 4813 scope.go:117] "RemoveContainer" containerID="b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.631610 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f"} err="failed to get container status \"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f\": rpc error: code = NotFound desc = could not find container \"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f\": container with ID starting with b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.631629 4813 scope.go:117] "RemoveContainer" containerID="ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.631788 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7"} err="failed to get container status \"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7\": rpc error: code = NotFound desc = could not find container \"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7\": container with ID starting with ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7 not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.631810 4813 scope.go:117] "RemoveContainer" containerID="f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.631982 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518"} err="failed to get container 
status \"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518\": rpc error: code = NotFound desc = could not find container \"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518\": container with ID starting with f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518 not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.632002 4813 scope.go:117] "RemoveContainer" containerID="a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.632185 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3"} err="failed to get container status \"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3\": rpc error: code = NotFound desc = could not find container \"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3\": container with ID starting with a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3 not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.632217 4813 scope.go:117] "RemoveContainer" containerID="b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.632406 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f"} err="failed to get container status \"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f\": rpc error: code = NotFound desc = could not find container \"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f\": container with ID starting with b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.632423 4813 scope.go:117] "RemoveContainer" 
containerID="ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.632592 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7"} err="failed to get container status \"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7\": rpc error: code = NotFound desc = could not find container \"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7\": container with ID starting with ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7 not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.632614 4813 scope.go:117] "RemoveContainer" containerID="f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.632844 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518"} err="failed to get container status \"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518\": rpc error: code = NotFound desc = could not find container \"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518\": container with ID starting with f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518 not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.632865 4813 scope.go:117] "RemoveContainer" containerID="a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.633045 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3"} err="failed to get container status \"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3\": rpc error: code = NotFound desc = could 
not find container \"a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3\": container with ID starting with a2567aac2a0f91d53bb7587f5e675f6c5efb4e912e28a7150934d107dce79ae3 not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.633067 4813 scope.go:117] "RemoveContainer" containerID="b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.633296 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f"} err="failed to get container status \"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f\": rpc error: code = NotFound desc = could not find container \"b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f\": container with ID starting with b040159577db8365989ba635808e81b9959e7c2c1f70283a86ad2021fa826f6f not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.633339 4813 scope.go:117] "RemoveContainer" containerID="ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.633541 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7"} err="failed to get container status \"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7\": rpc error: code = NotFound desc = could not find container \"ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7\": container with ID starting with ac9a7bc4f4d0fe881886ec4bfd8c854c3d71a5f92c9fbfc597e48cb592725ab7 not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.633560 4813 scope.go:117] "RemoveContainer" containerID="f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 
09:06:29.633698 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518"} err="failed to get container status \"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518\": rpc error: code = NotFound desc = could not find container \"f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518\": container with ID starting with f2aa95c79de6f2853303d53c464e3ceb299b371f1aeff8acc8a1e47f32105518 not found: ID does not exist" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.658795 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e3b418-2e0b-433a-8dc9-9981dae6bd9e-operator-scripts\") pod \"watcher-2cf2-account-create-update-c8xqj\" (UID: \"24e3b418-2e0b-433a-8dc9-9981dae6bd9e\") " pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.658869 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t8td\" (UniqueName: \"kubernetes.io/projected/24e3b418-2e0b-433a-8dc9-9981dae6bd9e-kube-api-access-5t8td\") pod \"watcher-2cf2-account-create-update-c8xqj\" (UID: \"24e3b418-2e0b-433a-8dc9-9981dae6bd9e\") " pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.658898 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwtv2\" (UniqueName: \"kubernetes.io/projected/e2010c77-96c8-40e2-8f5d-0501c6b6ae92-kube-api-access-qwtv2\") pod \"watcher-db-create-wltvx\" (UID: \"e2010c77-96c8-40e2-8f5d-0501c6b6ae92\") " pod="watcher-kuttl-default/watcher-db-create-wltvx" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.658915 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2010c77-96c8-40e2-8f5d-0501c6b6ae92-operator-scripts\") pod \"watcher-db-create-wltvx\" (UID: \"e2010c77-96c8-40e2-8f5d-0501c6b6ae92\") " pod="watcher-kuttl-default/watcher-db-create-wltvx" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.658978 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.658990 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f340f06-c740-4d76-9704-981cf40dccc5-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.659683 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2010c77-96c8-40e2-8f5d-0501c6b6ae92-operator-scripts\") pod \"watcher-db-create-wltvx\" (UID: \"e2010c77-96c8-40e2-8f5d-0501c6b6ae92\") " pod="watcher-kuttl-default/watcher-db-create-wltvx" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.687046 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwtv2\" (UniqueName: \"kubernetes.io/projected/e2010c77-96c8-40e2-8f5d-0501c6b6ae92-kube-api-access-qwtv2\") pod \"watcher-db-create-wltvx\" (UID: \"e2010c77-96c8-40e2-8f5d-0501c6b6ae92\") " pod="watcher-kuttl-default/watcher-db-create-wltvx" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.759762 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e3b418-2e0b-433a-8dc9-9981dae6bd9e-operator-scripts\") pod \"watcher-2cf2-account-create-update-c8xqj\" (UID: \"24e3b418-2e0b-433a-8dc9-9981dae6bd9e\") " 
pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.759871 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t8td\" (UniqueName: \"kubernetes.io/projected/24e3b418-2e0b-433a-8dc9-9981dae6bd9e-kube-api-access-5t8td\") pod \"watcher-2cf2-account-create-update-c8xqj\" (UID: \"24e3b418-2e0b-433a-8dc9-9981dae6bd9e\") " pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.760823 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e3b418-2e0b-433a-8dc9-9981dae6bd9e-operator-scripts\") pod \"watcher-2cf2-account-create-update-c8xqj\" (UID: \"24e3b418-2e0b-433a-8dc9-9981dae6bd9e\") " pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.777825 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t8td\" (UniqueName: \"kubernetes.io/projected/24e3b418-2e0b-433a-8dc9-9981dae6bd9e-kube-api-access-5t8td\") pod \"watcher-2cf2-account-create-update-c8xqj\" (UID: \"24e3b418-2e0b-433a-8dc9-9981dae6bd9e\") " pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.818766 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wltvx" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.892945 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.912746 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.943959 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.972910 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.980821 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.983712 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.983855 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.983991 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:06:29 crc kubenswrapper[4813]: I0217 09:06:29.990063 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.065695 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-log-httpd\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.065958 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.066009 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-run-httpd\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.066025 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.066048 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z677q\" (UniqueName: \"kubernetes.io/projected/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-kube-api-access-z677q\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.066067 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-scripts\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.066103 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.066120 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-config-data\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.167918 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-run-httpd\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.167961 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.167988 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z677q\" (UniqueName: \"kubernetes.io/projected/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-kube-api-access-z677q\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.168010 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-scripts\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.168059 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.168076 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-config-data\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.168128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-log-httpd\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.168147 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.169245 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-run-httpd\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.169401 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-log-httpd\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.174071 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.174179 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.185424 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-config-data\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.187474 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z677q\" (UniqueName: \"kubernetes.io/projected/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-kube-api-access-z677q\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.189967 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-scripts\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.205802 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.298954 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.361301 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wltvx"] Feb 17 09:06:30 crc kubenswrapper[4813]: W0217 09:06:30.368204 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2010c77_96c8_40e2_8f5d_0501c6b6ae92.slice/crio-39137adb042d9083ba046611edc673af69b11556e2a54ae1da20529f5cf9da24 WatchSource:0}: Error finding container 39137adb042d9083ba046611edc673af69b11556e2a54ae1da20529f5cf9da24: Status 404 returned error can't find the container with id 39137adb042d9083ba046611edc673af69b11556e2a54ae1da20529f5cf9da24 Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.501800 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wltvx" event={"ID":"e2010c77-96c8-40e2-8f5d-0501c6b6ae92","Type":"ContainerStarted","Data":"39137adb042d9083ba046611edc673af69b11556e2a54ae1da20529f5cf9da24"} Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.511411 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj"] Feb 17 09:06:30 crc kubenswrapper[4813]: W0217 09:06:30.521379 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24e3b418_2e0b_433a_8dc9_9981dae6bd9e.slice/crio-a6d32a94162ecfe813f1f7ee6e17f14e5be9052e38a2694c9973a1f58c6565a1 WatchSource:0}: Error finding container a6d32a94162ecfe813f1f7ee6e17f14e5be9052e38a2694c9973a1f58c6565a1: Status 404 returned error can't find the container with id a6d32a94162ecfe813f1f7ee6e17f14e5be9052e38a2694c9973a1f58c6565a1 Feb 17 09:06:30 crc kubenswrapper[4813]: I0217 09:06:30.718847 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:30 crc kubenswrapper[4813]: W0217 09:06:30.733563 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf43ce1d8_e6ec_4ba7_8d7f_69584271ea73.slice/crio-03e636ce7258ab3469f130b12431c7dbc85a29dbee65f6ea0abc3dd3021d2dbf WatchSource:0}: Error finding container 03e636ce7258ab3469f130b12431c7dbc85a29dbee65f6ea0abc3dd3021d2dbf: Status 404 returned error can't find the container with id 03e636ce7258ab3469f130b12431c7dbc85a29dbee65f6ea0abc3dd3021d2dbf Feb 17 09:06:31 crc kubenswrapper[4813]: I0217 09:06:31.127382 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f340f06-c740-4d76-9704-981cf40dccc5" path="/var/lib/kubelet/pods/9f340f06-c740-4d76-9704-981cf40dccc5/volumes" Feb 17 09:06:31 crc kubenswrapper[4813]: I0217 09:06:31.517278 4813 generic.go:334] "Generic (PLEG): container finished" podID="e2010c77-96c8-40e2-8f5d-0501c6b6ae92" containerID="48b5263416213597e92c2a126e8567c18d9e49ae9e9f0cbbacd991ec2e795c7c" exitCode=0 Feb 17 09:06:31 crc kubenswrapper[4813]: I0217 09:06:31.517357 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wltvx" event={"ID":"e2010c77-96c8-40e2-8f5d-0501c6b6ae92","Type":"ContainerDied","Data":"48b5263416213597e92c2a126e8567c18d9e49ae9e9f0cbbacd991ec2e795c7c"} Feb 17 09:06:31 crc kubenswrapper[4813]: I0217 
09:06:31.520025 4813 generic.go:334] "Generic (PLEG): container finished" podID="24e3b418-2e0b-433a-8dc9-9981dae6bd9e" containerID="b08eed0991b86ea479de56fdc681732c3942f2fb1c9ce6c81440dbf4cae6b868" exitCode=0 Feb 17 09:06:31 crc kubenswrapper[4813]: I0217 09:06:31.520117 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" event={"ID":"24e3b418-2e0b-433a-8dc9-9981dae6bd9e","Type":"ContainerDied","Data":"b08eed0991b86ea479de56fdc681732c3942f2fb1c9ce6c81440dbf4cae6b868"} Feb 17 09:06:31 crc kubenswrapper[4813]: I0217 09:06:31.520170 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" event={"ID":"24e3b418-2e0b-433a-8dc9-9981dae6bd9e","Type":"ContainerStarted","Data":"a6d32a94162ecfe813f1f7ee6e17f14e5be9052e38a2694c9973a1f58c6565a1"} Feb 17 09:06:31 crc kubenswrapper[4813]: I0217 09:06:31.522401 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73","Type":"ContainerStarted","Data":"4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926"} Feb 17 09:06:31 crc kubenswrapper[4813]: I0217 09:06:31.522441 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73","Type":"ContainerStarted","Data":"03e636ce7258ab3469f130b12431c7dbc85a29dbee65f6ea0abc3dd3021d2dbf"} Feb 17 09:06:32 crc kubenswrapper[4813]: I0217 09:06:32.532048 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73","Type":"ContainerStarted","Data":"db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18"} Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.106407 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.110920 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:06:33 crc kubenswrapper[4813]: E0217 09:06:33.111402 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.115387 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wltvx" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.226165 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t8td\" (UniqueName: \"kubernetes.io/projected/24e3b418-2e0b-433a-8dc9-9981dae6bd9e-kube-api-access-5t8td\") pod \"24e3b418-2e0b-433a-8dc9-9981dae6bd9e\" (UID: \"24e3b418-2e0b-433a-8dc9-9981dae6bd9e\") " Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.226284 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2010c77-96c8-40e2-8f5d-0501c6b6ae92-operator-scripts\") pod \"e2010c77-96c8-40e2-8f5d-0501c6b6ae92\" (UID: \"e2010c77-96c8-40e2-8f5d-0501c6b6ae92\") " Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.226452 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e3b418-2e0b-433a-8dc9-9981dae6bd9e-operator-scripts\") pod \"24e3b418-2e0b-433a-8dc9-9981dae6bd9e\" 
(UID: \"24e3b418-2e0b-433a-8dc9-9981dae6bd9e\") " Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.226510 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwtv2\" (UniqueName: \"kubernetes.io/projected/e2010c77-96c8-40e2-8f5d-0501c6b6ae92-kube-api-access-qwtv2\") pod \"e2010c77-96c8-40e2-8f5d-0501c6b6ae92\" (UID: \"e2010c77-96c8-40e2-8f5d-0501c6b6ae92\") " Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.228679 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e3b418-2e0b-433a-8dc9-9981dae6bd9e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24e3b418-2e0b-433a-8dc9-9981dae6bd9e" (UID: "24e3b418-2e0b-433a-8dc9-9981dae6bd9e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.229049 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2010c77-96c8-40e2-8f5d-0501c6b6ae92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2010c77-96c8-40e2-8f5d-0501c6b6ae92" (UID: "e2010c77-96c8-40e2-8f5d-0501c6b6ae92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.234770 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2010c77-96c8-40e2-8f5d-0501c6b6ae92-kube-api-access-qwtv2" (OuterVolumeSpecName: "kube-api-access-qwtv2") pod "e2010c77-96c8-40e2-8f5d-0501c6b6ae92" (UID: "e2010c77-96c8-40e2-8f5d-0501c6b6ae92"). InnerVolumeSpecName "kube-api-access-qwtv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.234879 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e3b418-2e0b-433a-8dc9-9981dae6bd9e-kube-api-access-5t8td" (OuterVolumeSpecName: "kube-api-access-5t8td") pod "24e3b418-2e0b-433a-8dc9-9981dae6bd9e" (UID: "24e3b418-2e0b-433a-8dc9-9981dae6bd9e"). InnerVolumeSpecName "kube-api-access-5t8td". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.328234 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t8td\" (UniqueName: \"kubernetes.io/projected/24e3b418-2e0b-433a-8dc9-9981dae6bd9e-kube-api-access-5t8td\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.328476 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2010c77-96c8-40e2-8f5d-0501c6b6ae92-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.328486 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24e3b418-2e0b-433a-8dc9-9981dae6bd9e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.328494 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwtv2\" (UniqueName: \"kubernetes.io/projected/e2010c77-96c8-40e2-8f5d-0501c6b6ae92-kube-api-access-qwtv2\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.540127 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wltvx" event={"ID":"e2010c77-96c8-40e2-8f5d-0501c6b6ae92","Type":"ContainerDied","Data":"39137adb042d9083ba046611edc673af69b11556e2a54ae1da20529f5cf9da24"} Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 
09:06:33.540148 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wltvx" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.540166 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39137adb042d9083ba046611edc673af69b11556e2a54ae1da20529f5cf9da24" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.541718 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.541858 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj" event={"ID":"24e3b418-2e0b-433a-8dc9-9981dae6bd9e","Type":"ContainerDied","Data":"a6d32a94162ecfe813f1f7ee6e17f14e5be9052e38a2694c9973a1f58c6565a1"} Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.541904 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6d32a94162ecfe813f1f7ee6e17f14e5be9052e38a2694c9973a1f58c6565a1" Feb 17 09:06:33 crc kubenswrapper[4813]: I0217 09:06:33.544266 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73","Type":"ContainerStarted","Data":"b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7"} Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.554781 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73","Type":"ContainerStarted","Data":"035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef"} Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.555327 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.575594 4813 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.488636615 podStartE2EDuration="5.575572922s" podCreationTimestamp="2026-02-17 09:06:29 +0000 UTC" firstStartedPulling="2026-02-17 09:06:30.735803836 +0000 UTC m=+1538.396565059" lastFinishedPulling="2026-02-17 09:06:33.822740133 +0000 UTC m=+1541.483501366" observedRunningTime="2026-02-17 09:06:34.574040918 +0000 UTC m=+1542.234802151" watchObservedRunningTime="2026-02-17 09:06:34.575572922 +0000 UTC m=+1542.236334145" Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.864388 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn"] Feb 17 09:06:34 crc kubenswrapper[4813]: E0217 09:06:34.864712 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2010c77-96c8-40e2-8f5d-0501c6b6ae92" containerName="mariadb-database-create" Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.864727 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2010c77-96c8-40e2-8f5d-0501c6b6ae92" containerName="mariadb-database-create" Feb 17 09:06:34 crc kubenswrapper[4813]: E0217 09:06:34.864747 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e3b418-2e0b-433a-8dc9-9981dae6bd9e" containerName="mariadb-account-create-update" Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.864754 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e3b418-2e0b-433a-8dc9-9981dae6bd9e" containerName="mariadb-account-create-update" Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.864889 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2010c77-96c8-40e2-8f5d-0501c6b6ae92" containerName="mariadb-database-create" Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.864905 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e3b418-2e0b-433a-8dc9-9981dae6bd9e" containerName="mariadb-account-create-update" Feb 17 
09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.865462 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.869738 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-x5wmm" Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.870025 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.878547 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn"] Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.958446 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-db-sync-config-data\") pod \"watcher-kuttl-db-sync-zn7pn\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.958500 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-config-data\") pod \"watcher-kuttl-db-sync-zn7pn\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.958633 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-zn7pn\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 
17 09:06:34 crc kubenswrapper[4813]: I0217 09:06:34.958673 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dh6z\" (UniqueName: \"kubernetes.io/projected/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-kube-api-access-5dh6z\") pod \"watcher-kuttl-db-sync-zn7pn\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 17 09:06:35 crc kubenswrapper[4813]: I0217 09:06:35.060546 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-zn7pn\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 17 09:06:35 crc kubenswrapper[4813]: I0217 09:06:35.060590 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dh6z\" (UniqueName: \"kubernetes.io/projected/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-kube-api-access-5dh6z\") pod \"watcher-kuttl-db-sync-zn7pn\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 17 09:06:35 crc kubenswrapper[4813]: I0217 09:06:35.060667 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-db-sync-config-data\") pod \"watcher-kuttl-db-sync-zn7pn\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 17 09:06:35 crc kubenswrapper[4813]: I0217 09:06:35.060692 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-config-data\") pod \"watcher-kuttl-db-sync-zn7pn\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") " 
pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 17 09:06:35 crc kubenswrapper[4813]: I0217 09:06:35.065607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-config-data\") pod \"watcher-kuttl-db-sync-zn7pn\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 17 09:06:35 crc kubenswrapper[4813]: I0217 09:06:35.065655 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-zn7pn\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 17 09:06:35 crc kubenswrapper[4813]: I0217 09:06:35.067207 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-db-sync-config-data\") pod \"watcher-kuttl-db-sync-zn7pn\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 17 09:06:35 crc kubenswrapper[4813]: I0217 09:06:35.082436 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dh6z\" (UniqueName: \"kubernetes.io/projected/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-kube-api-access-5dh6z\") pod \"watcher-kuttl-db-sync-zn7pn\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" Feb 17 09:06:35 crc kubenswrapper[4813]: I0217 09:06:35.195360 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn"
Feb 17 09:06:35 crc kubenswrapper[4813]: I0217 09:06:35.674356 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn"]
Feb 17 09:06:35 crc kubenswrapper[4813]: W0217 09:06:35.690123 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5e6c1c8_a518_48a9_b56a_34fdd8c847af.slice/crio-d2fc2875decfba206a391b654ec2734e8ce91b2df9ac592b5288d7f16b001d89 WatchSource:0}: Error finding container d2fc2875decfba206a391b654ec2734e8ce91b2df9ac592b5288d7f16b001d89: Status 404 returned error can't find the container with id d2fc2875decfba206a391b654ec2734e8ce91b2df9ac592b5288d7f16b001d89
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.571612 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" event={"ID":"c5e6c1c8-a518-48a9-b56a-34fdd8c847af","Type":"ContainerStarted","Data":"8b38453ed88cba3c9d6b0c7cd5f0919fe5c550faa5257d5f2098b1c1f57d1b06"}
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.571959 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" event={"ID":"c5e6c1c8-a518-48a9-b56a-34fdd8c847af","Type":"ContainerStarted","Data":"d2fc2875decfba206a391b654ec2734e8ce91b2df9ac592b5288d7f16b001d89"}
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.609651 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" podStartSLOduration=2.609635082 podStartE2EDuration="2.609635082s" podCreationTimestamp="2026-02-17 09:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:06:36.60711911 +0000 UTC m=+1544.267880333" watchObservedRunningTime="2026-02-17 09:06:36.609635082 +0000 UTC m=+1544.270396305"
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.785390 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h4rxm"]
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.795909 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4rxm"
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.817183 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4rxm"]
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.894324 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fbgv\" (UniqueName: \"kubernetes.io/projected/678cdd4d-9d0e-4e2a-b9de-096d52ced581-kube-api-access-8fbgv\") pod \"certified-operators-h4rxm\" (UID: \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\") " pod="openshift-marketplace/certified-operators-h4rxm"
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.894375 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678cdd4d-9d0e-4e2a-b9de-096d52ced581-utilities\") pod \"certified-operators-h4rxm\" (UID: \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\") " pod="openshift-marketplace/certified-operators-h4rxm"
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.894430 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678cdd4d-9d0e-4e2a-b9de-096d52ced581-catalog-content\") pod \"certified-operators-h4rxm\" (UID: \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\") " pod="openshift-marketplace/certified-operators-h4rxm"
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.996431 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fbgv\" (UniqueName: \"kubernetes.io/projected/678cdd4d-9d0e-4e2a-b9de-096d52ced581-kube-api-access-8fbgv\") pod \"certified-operators-h4rxm\" (UID: \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\") " pod="openshift-marketplace/certified-operators-h4rxm"
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.996486 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678cdd4d-9d0e-4e2a-b9de-096d52ced581-utilities\") pod \"certified-operators-h4rxm\" (UID: \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\") " pod="openshift-marketplace/certified-operators-h4rxm"
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.996560 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678cdd4d-9d0e-4e2a-b9de-096d52ced581-catalog-content\") pod \"certified-operators-h4rxm\" (UID: \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\") " pod="openshift-marketplace/certified-operators-h4rxm"
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.997176 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678cdd4d-9d0e-4e2a-b9de-096d52ced581-utilities\") pod \"certified-operators-h4rxm\" (UID: \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\") " pod="openshift-marketplace/certified-operators-h4rxm"
Feb 17 09:06:36 crc kubenswrapper[4813]: I0217 09:06:36.997203 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678cdd4d-9d0e-4e2a-b9de-096d52ced581-catalog-content\") pod \"certified-operators-h4rxm\" (UID: \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\") " pod="openshift-marketplace/certified-operators-h4rxm"
Feb 17 09:06:37 crc kubenswrapper[4813]: I0217 09:06:37.027136 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fbgv\" (UniqueName: \"kubernetes.io/projected/678cdd4d-9d0e-4e2a-b9de-096d52ced581-kube-api-access-8fbgv\") pod \"certified-operators-h4rxm\" (UID: \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\") " pod="openshift-marketplace/certified-operators-h4rxm"
Feb 17 09:06:37 crc kubenswrapper[4813]: I0217 09:06:37.124881 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4rxm"
Feb 17 09:06:37 crc kubenswrapper[4813]: I0217 09:06:37.675105 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4rxm"]
Feb 17 09:06:37 crc kubenswrapper[4813]: W0217 09:06:37.675737 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod678cdd4d_9d0e_4e2a_b9de_096d52ced581.slice/crio-14c2224695fb9aa48e320df80ad7a756c38d5dfe47cd7d6a4b5f058eb8e12a99 WatchSource:0}: Error finding container 14c2224695fb9aa48e320df80ad7a756c38d5dfe47cd7d6a4b5f058eb8e12a99: Status 404 returned error can't find the container with id 14c2224695fb9aa48e320df80ad7a756c38d5dfe47cd7d6a4b5f058eb8e12a99
Feb 17 09:06:38 crc kubenswrapper[4813]: I0217 09:06:38.587552 4813 generic.go:334] "Generic (PLEG): container finished" podID="c5e6c1c8-a518-48a9-b56a-34fdd8c847af" containerID="8b38453ed88cba3c9d6b0c7cd5f0919fe5c550faa5257d5f2098b1c1f57d1b06" exitCode=0
Feb 17 09:06:38 crc kubenswrapper[4813]: I0217 09:06:38.587624 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" event={"ID":"c5e6c1c8-a518-48a9-b56a-34fdd8c847af","Type":"ContainerDied","Data":"8b38453ed88cba3c9d6b0c7cd5f0919fe5c550faa5257d5f2098b1c1f57d1b06"}
Feb 17 09:06:38 crc kubenswrapper[4813]: I0217 09:06:38.590469 4813 generic.go:334] "Generic (PLEG): container finished" podID="678cdd4d-9d0e-4e2a-b9de-096d52ced581" containerID="c59fe5bf139cf63f1f6f75ff9b2885659d59271d431fe3f3a234028b63c17f00" exitCode=0
Feb 17 09:06:38 crc kubenswrapper[4813]: I0217 09:06:38.590506 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4rxm" event={"ID":"678cdd4d-9d0e-4e2a-b9de-096d52ced581","Type":"ContainerDied","Data":"c59fe5bf139cf63f1f6f75ff9b2885659d59271d431fe3f3a234028b63c17f00"}
Feb 17 09:06:38 crc kubenswrapper[4813]: I0217 09:06:38.590538 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4rxm" event={"ID":"678cdd4d-9d0e-4e2a-b9de-096d52ced581","Type":"ContainerStarted","Data":"14c2224695fb9aa48e320df80ad7a756c38d5dfe47cd7d6a4b5f058eb8e12a99"}
Feb 17 09:06:39 crc kubenswrapper[4813]: I0217 09:06:39.607922 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4rxm" event={"ID":"678cdd4d-9d0e-4e2a-b9de-096d52ced581","Type":"ContainerStarted","Data":"fb635d52ad60a0eb8d766e26c6af78ff6326c0fe12a13cb6380bf9d60d1896f4"}
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.063115 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn"
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.157731 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-combined-ca-bundle\") pod \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") "
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.158535 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-config-data\") pod \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") "
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.158671 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dh6z\" (UniqueName: \"kubernetes.io/projected/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-kube-api-access-5dh6z\") pod \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") "
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.158831 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-db-sync-config-data\") pod \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\" (UID: \"c5e6c1c8-a518-48a9-b56a-34fdd8c847af\") "
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.163411 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c5e6c1c8-a518-48a9-b56a-34fdd8c847af" (UID: "c5e6c1c8-a518-48a9-b56a-34fdd8c847af"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.168534 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-kube-api-access-5dh6z" (OuterVolumeSpecName: "kube-api-access-5dh6z") pod "c5e6c1c8-a518-48a9-b56a-34fdd8c847af" (UID: "c5e6c1c8-a518-48a9-b56a-34fdd8c847af"). InnerVolumeSpecName "kube-api-access-5dh6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.190459 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5e6c1c8-a518-48a9-b56a-34fdd8c847af" (UID: "c5e6c1c8-a518-48a9-b56a-34fdd8c847af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.256046 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-config-data" (OuterVolumeSpecName: "config-data") pod "c5e6c1c8-a518-48a9-b56a-34fdd8c847af" (UID: "c5e6c1c8-a518-48a9-b56a-34fdd8c847af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.261495 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.261541 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.261556 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.261569 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dh6z\" (UniqueName: \"kubernetes.io/projected/c5e6c1c8-a518-48a9-b56a-34fdd8c847af-kube-api-access-5dh6z\") on node \"crc\" DevicePath \"\""
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.619422 4813 generic.go:334] "Generic (PLEG): container finished" podID="678cdd4d-9d0e-4e2a-b9de-096d52ced581" containerID="fb635d52ad60a0eb8d766e26c6af78ff6326c0fe12a13cb6380bf9d60d1896f4" exitCode=0
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.619514 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4rxm" event={"ID":"678cdd4d-9d0e-4e2a-b9de-096d52ced581","Type":"ContainerDied","Data":"fb635d52ad60a0eb8d766e26c6af78ff6326c0fe12a13cb6380bf9d60d1896f4"}
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.623836 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn" event={"ID":"c5e6c1c8-a518-48a9-b56a-34fdd8c847af","Type":"ContainerDied","Data":"d2fc2875decfba206a391b654ec2734e8ce91b2df9ac592b5288d7f16b001d89"}
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.623877 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2fc2875decfba206a391b654ec2734e8ce91b2df9ac592b5288d7f16b001d89"
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.624006 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn"
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.934907 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:06:40 crc kubenswrapper[4813]: E0217 09:06:40.935298 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e6c1c8-a518-48a9-b56a-34fdd8c847af" containerName="watcher-kuttl-db-sync"
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.935330 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e6c1c8-a518-48a9-b56a-34fdd8c847af" containerName="watcher-kuttl-db-sync"
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.935492 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e6c1c8-a518-48a9-b56a-34fdd8c847af" containerName="watcher-kuttl-db-sync"
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.936351 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.938795 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.939007 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-x5wmm"
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.941862 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.942933 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.945255 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.951602 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:06:40 crc kubenswrapper[4813]: I0217 09:06:40.992538 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.016385 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.017652 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.021074 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.029599 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120101 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120138 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120161 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120210 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86nbs\" (UniqueName: \"kubernetes.io/projected/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-kube-api-access-86nbs\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120349 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120395 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120441 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120470 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-logs\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120502 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkdwr\" (UniqueName: \"kubernetes.io/projected/12e51adc-ff2c-45a8-85f2-395fbb065e5e-kube-api-access-gkdwr\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120548 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120584 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m2sf\" (UniqueName: \"kubernetes.io/projected/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-kube-api-access-6m2sf\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120652 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120692 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120752 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120772 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e51adc-ff2c-45a8-85f2-395fbb065e5e-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.120816 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.222359 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.222443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.222486 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.222509 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.222575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86nbs\" (UniqueName: \"kubernetes.io/projected/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-kube-api-access-86nbs\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.222599 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.222815 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.223358 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.223916 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.223964 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-logs\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.223995 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkdwr\" (UniqueName: \"kubernetes.io/projected/12e51adc-ff2c-45a8-85f2-395fbb065e5e-kube-api-access-gkdwr\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.224032 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.224063 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m2sf\" (UniqueName: \"kubernetes.io/projected/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-kube-api-access-6m2sf\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.224113 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.224137 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.224201 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.224240 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e51adc-ff2c-45a8-85f2-395fbb065e5e-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.224346 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.224816 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-logs\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.225892 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e51adc-ff2c-45a8-85f2-395fbb065e5e-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.227514 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.227538 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.228099 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.228458 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.228576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.229591 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.229866 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.230012 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.233760 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.236523 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.244808 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.244825 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m2sf\" (UniqueName: \"kubernetes.io/projected/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-kube-api-access-6m2sf\") pod \"watcher-kuttl-applier-0\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.244916 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86nbs\" (UniqueName: \"kubernetes.io/projected/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-kube-api-access-86nbs\") pod \"watcher-kuttl-api-0\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.247826 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkdwr\" (UniqueName: \"kubernetes.io/projected/12e51adc-ff2c-45a8-85f2-395fbb065e5e-kube-api-access-gkdwr\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.258697 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.259628 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.339263 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.651283 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4rxm" event={"ID":"678cdd4d-9d0e-4e2a-b9de-096d52ced581","Type":"ContainerStarted","Data":"374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f"}
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.678366 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h4rxm" podStartSLOduration=3.246299262 podStartE2EDuration="5.678346534s" podCreationTimestamp="2026-02-17 09:06:36 +0000 UTC" firstStartedPulling="2026-02-17 09:06:38.591993884 +0000 UTC m=+1546.252755107" lastFinishedPulling="2026-02-17 09:06:41.024041156 +0000 UTC m=+1548.684802379" observedRunningTime="2026-02-17 09:06:41.677915932 +0000 UTC m=+1549.338677155" watchObservedRunningTime="2026-02-17 09:06:41.678346534 +0000 UTC m=+1549.339107757"
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.849751 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.902667 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:06:41 crc kubenswrapper[4813]: I0217 09:06:41.938286 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:06:41 crc kubenswrapper[4813]: W0217 09:06:41.949082 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fecfdd5_e56e_43ca_88f3_2e5922c30afe.slice/crio-3015319753ef5e472e1017f0d01bd0bda2d8297ad2f7d0feb04b09a5445f0790 WatchSource:0}: Error finding container 3015319753ef5e472e1017f0d01bd0bda2d8297ad2f7d0feb04b09a5445f0790: Status 404 returned error 
can't find the container with id 3015319753ef5e472e1017f0d01bd0bda2d8297ad2f7d0feb04b09a5445f0790 Feb 17 09:06:42 crc kubenswrapper[4813]: I0217 09:06:42.659757 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"12e51adc-ff2c-45a8-85f2-395fbb065e5e","Type":"ContainerStarted","Data":"b123b1de27be1dd37fd9ca53f45bca5ddd912cfb938c3bf35b5eb51143848120"} Feb 17 09:06:42 crc kubenswrapper[4813]: I0217 09:06:42.660110 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"12e51adc-ff2c-45a8-85f2-395fbb065e5e","Type":"ContainerStarted","Data":"64bb94a1948fcbdf0443e3b804d581b02cbd71c68a91a277d4a112aef4cdd6d2"} Feb 17 09:06:42 crc kubenswrapper[4813]: I0217 09:06:42.661438 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce","Type":"ContainerStarted","Data":"ca6aee87ce0eb2ef212797e6968c69a386b919e02ad618eb063ded507395d8ef"} Feb 17 09:06:42 crc kubenswrapper[4813]: I0217 09:06:42.661479 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce","Type":"ContainerStarted","Data":"2b6e96e3f8974e3e73b1d26ba04c0d1eb7a1bc11f4df592338cacd2623f0af87"} Feb 17 09:06:42 crc kubenswrapper[4813]: I0217 09:06:42.661497 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce","Type":"ContainerStarted","Data":"b2dfce26e6bd73c0f99ca1052429b45137339456431381ee116f074f8f7bd90b"} Feb 17 09:06:42 crc kubenswrapper[4813]: I0217 09:06:42.661848 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:42 crc kubenswrapper[4813]: I0217 09:06:42.665107 4813 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"6fecfdd5-e56e-43ca-88f3-2e5922c30afe","Type":"ContainerStarted","Data":"12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758"} Feb 17 09:06:42 crc kubenswrapper[4813]: I0217 09:06:42.665139 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"6fecfdd5-e56e-43ca-88f3-2e5922c30afe","Type":"ContainerStarted","Data":"3015319753ef5e472e1017f0d01bd0bda2d8297ad2f7d0feb04b09a5445f0790"} Feb 17 09:06:42 crc kubenswrapper[4813]: I0217 09:06:42.715712 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.715698347 podStartE2EDuration="2.715698347s" podCreationTimestamp="2026-02-17 09:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:06:42.714351728 +0000 UTC m=+1550.375112951" watchObservedRunningTime="2026-02-17 09:06:42.715698347 +0000 UTC m=+1550.376459570" Feb 17 09:06:42 crc kubenswrapper[4813]: I0217 09:06:42.733450 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.733430754 podStartE2EDuration="2.733430754s" podCreationTimestamp="2026-02-17 09:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:06:42.698201266 +0000 UTC m=+1550.358962499" watchObservedRunningTime="2026-02-17 09:06:42.733430754 +0000 UTC m=+1550.394191977" Feb 17 09:06:42 crc kubenswrapper[4813]: I0217 09:06:42.748230 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.7482101070000002 podStartE2EDuration="2.748210107s" podCreationTimestamp="2026-02-17 
09:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:06:42.745378976 +0000 UTC m=+1550.406140219" watchObservedRunningTime="2026-02-17 09:06:42.748210107 +0000 UTC m=+1550.408971350" Feb 17 09:06:44 crc kubenswrapper[4813]: I0217 09:06:44.112753 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:06:44 crc kubenswrapper[4813]: E0217 09:06:44.113269 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:06:44 crc kubenswrapper[4813]: I0217 09:06:44.982092 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:46 crc kubenswrapper[4813]: I0217 09:06:46.259168 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:46 crc kubenswrapper[4813]: I0217 09:06:46.340267 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:47 crc kubenswrapper[4813]: I0217 09:06:47.125411 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h4rxm" Feb 17 09:06:47 crc kubenswrapper[4813]: I0217 09:06:47.125462 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h4rxm" Feb 17 09:06:47 crc kubenswrapper[4813]: I0217 09:06:47.177794 4813 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-h4rxm" Feb 17 09:06:47 crc kubenswrapper[4813]: I0217 09:06:47.794127 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h4rxm" Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.587704 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p287t"] Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.590849 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.612624 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p287t"] Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.765189 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-utilities\") pod \"redhat-marketplace-p287t\" (UID: \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\") " pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.765564 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qp66\" (UniqueName: \"kubernetes.io/projected/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-kube-api-access-5qp66\") pod \"redhat-marketplace-p287t\" (UID: \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\") " pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.765721 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-catalog-content\") pod \"redhat-marketplace-p287t\" (UID: \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\") " 
pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.867173 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-utilities\") pod \"redhat-marketplace-p287t\" (UID: \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\") " pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.867622 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-utilities\") pod \"redhat-marketplace-p287t\" (UID: \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\") " pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.868112 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qp66\" (UniqueName: \"kubernetes.io/projected/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-kube-api-access-5qp66\") pod \"redhat-marketplace-p287t\" (UID: \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\") " pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.868267 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-catalog-content\") pod \"redhat-marketplace-p287t\" (UID: \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\") " pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.868665 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-catalog-content\") pod \"redhat-marketplace-p287t\" (UID: \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\") " pod="openshift-marketplace/redhat-marketplace-p287t" 
Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.890424 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qp66\" (UniqueName: \"kubernetes.io/projected/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-kube-api-access-5qp66\") pod \"redhat-marketplace-p287t\" (UID: \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\") " pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:48 crc kubenswrapper[4813]: I0217 09:06:48.916820 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:49 crc kubenswrapper[4813]: I0217 09:06:49.417854 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p287t"] Feb 17 09:06:49 crc kubenswrapper[4813]: W0217 09:06:49.420624 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcadb58ec_d8e5_4f81_b0fb_5233d5c31d3e.slice/crio-d06a74a33479803be5257beb2c81a54585905c65fe735838d3d20015762d3490 WatchSource:0}: Error finding container d06a74a33479803be5257beb2c81a54585905c65fe735838d3d20015762d3490: Status 404 returned error can't find the container with id d06a74a33479803be5257beb2c81a54585905c65fe735838d3d20015762d3490 Feb 17 09:06:49 crc kubenswrapper[4813]: I0217 09:06:49.733218 4813 generic.go:334] "Generic (PLEG): container finished" podID="cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" containerID="a2c9500705dbb3d0609f24e6bcbb76c7050854c9701b12579c6857b80ff8c9d9" exitCode=0 Feb 17 09:06:49 crc kubenswrapper[4813]: I0217 09:06:49.733426 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p287t" event={"ID":"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e","Type":"ContainerDied","Data":"a2c9500705dbb3d0609f24e6bcbb76c7050854c9701b12579c6857b80ff8c9d9"} Feb 17 09:06:49 crc kubenswrapper[4813]: I0217 09:06:49.733588 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-p287t" event={"ID":"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e","Type":"ContainerStarted","Data":"d06a74a33479803be5257beb2c81a54585905c65fe735838d3d20015762d3490"} Feb 17 09:06:50 crc kubenswrapper[4813]: I0217 09:06:50.747056 4813 generic.go:334] "Generic (PLEG): container finished" podID="cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" containerID="7bdbee95059c9cb9f8247a540dafff3e7c2af5f9b3b77158ee6c6f3a717125da" exitCode=0 Feb 17 09:06:50 crc kubenswrapper[4813]: I0217 09:06:50.747897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p287t" event={"ID":"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e","Type":"ContainerDied","Data":"7bdbee95059c9cb9f8247a540dafff3e7c2af5f9b3b77158ee6c6f3a717125da"} Feb 17 09:06:51 crc kubenswrapper[4813]: I0217 09:06:51.832586 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:51 crc kubenswrapper[4813]: I0217 09:06:51.832653 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:51 crc kubenswrapper[4813]: I0217 09:06:51.879762 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:51 crc kubenswrapper[4813]: I0217 09:06:51.890498 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:51 crc kubenswrapper[4813]: I0217 09:06:51.917385 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4rxm"] Feb 17 09:06:51 crc kubenswrapper[4813]: I0217 09:06:51.918058 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h4rxm" podUID="678cdd4d-9d0e-4e2a-b9de-096d52ced581" containerName="registry-server" 
containerID="cri-o://374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f" gracePeriod=2 Feb 17 09:06:51 crc kubenswrapper[4813]: I0217 09:06:51.944645 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:51 crc kubenswrapper[4813]: I0217 09:06:51.948387 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.522884 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4rxm" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.549851 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fbgv\" (UniqueName: \"kubernetes.io/projected/678cdd4d-9d0e-4e2a-b9de-096d52ced581-kube-api-access-8fbgv\") pod \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\" (UID: \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\") " Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.549916 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678cdd4d-9d0e-4e2a-b9de-096d52ced581-utilities\") pod \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\" (UID: \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\") " Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.550067 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678cdd4d-9d0e-4e2a-b9de-096d52ced581-catalog-content\") pod \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\" (UID: \"678cdd4d-9d0e-4e2a-b9de-096d52ced581\") " Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.550890 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/678cdd4d-9d0e-4e2a-b9de-096d52ced581-utilities" (OuterVolumeSpecName: 
"utilities") pod "678cdd4d-9d0e-4e2a-b9de-096d52ced581" (UID: "678cdd4d-9d0e-4e2a-b9de-096d52ced581"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.559533 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678cdd4d-9d0e-4e2a-b9de-096d52ced581-kube-api-access-8fbgv" (OuterVolumeSpecName: "kube-api-access-8fbgv") pod "678cdd4d-9d0e-4e2a-b9de-096d52ced581" (UID: "678cdd4d-9d0e-4e2a-b9de-096d52ced581"). InnerVolumeSpecName "kube-api-access-8fbgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.602213 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/678cdd4d-9d0e-4e2a-b9de-096d52ced581-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "678cdd4d-9d0e-4e2a-b9de-096d52ced581" (UID: "678cdd4d-9d0e-4e2a-b9de-096d52ced581"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.652516 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fbgv\" (UniqueName: \"kubernetes.io/projected/678cdd4d-9d0e-4e2a-b9de-096d52ced581-kube-api-access-8fbgv\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.652558 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678cdd4d-9d0e-4e2a-b9de-096d52ced581-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.652570 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678cdd4d-9d0e-4e2a-b9de-096d52ced581-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.875363 4813 generic.go:334] "Generic (PLEG): container finished" podID="678cdd4d-9d0e-4e2a-b9de-096d52ced581" containerID="374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f" exitCode=0 Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.875421 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4rxm" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.875438 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4rxm" event={"ID":"678cdd4d-9d0e-4e2a-b9de-096d52ced581","Type":"ContainerDied","Data":"374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f"} Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.876416 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4rxm" event={"ID":"678cdd4d-9d0e-4e2a-b9de-096d52ced581","Type":"ContainerDied","Data":"14c2224695fb9aa48e320df80ad7a756c38d5dfe47cd7d6a4b5f058eb8e12a99"} Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.876470 4813 scope.go:117] "RemoveContainer" containerID="374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.878929 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p287t" event={"ID":"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e","Type":"ContainerStarted","Data":"ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523"} Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.880981 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.887676 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.922504 4813 scope.go:117] "RemoveContainer" containerID="fb635d52ad60a0eb8d766e26c6af78ff6326c0fe12a13cb6380bf9d60d1896f4" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.938931 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p287t" podStartSLOduration=2.604647543 
podStartE2EDuration="4.938912706s" podCreationTimestamp="2026-02-17 09:06:48 +0000 UTC" firstStartedPulling="2026-02-17 09:06:49.734816225 +0000 UTC m=+1557.395577448" lastFinishedPulling="2026-02-17 09:06:52.069081378 +0000 UTC m=+1559.729842611" observedRunningTime="2026-02-17 09:06:52.923922467 +0000 UTC m=+1560.584683690" watchObservedRunningTime="2026-02-17 09:06:52.938912706 +0000 UTC m=+1560.599673929" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.939757 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.943574 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4rxm"] Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.944836 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.950823 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h4rxm"] Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.953820 4813 scope.go:117] "RemoveContainer" containerID="c59fe5bf139cf63f1f6f75ff9b2885659d59271d431fe3f3a234028b63c17f00" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.981816 4813 scope.go:117] "RemoveContainer" containerID="374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f" Feb 17 09:06:52 crc kubenswrapper[4813]: E0217 09:06:52.982918 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f\": container with ID starting with 374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f not found: ID does not exist" containerID="374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f" Feb 17 09:06:52 crc 
kubenswrapper[4813]: I0217 09:06:52.983028 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f"} err="failed to get container status \"374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f\": rpc error: code = NotFound desc = could not find container \"374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f\": container with ID starting with 374ff77674375eed9208aa3d0d543721ea5d8be6269533b862eef687d77ea04f not found: ID does not exist" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.983121 4813 scope.go:117] "RemoveContainer" containerID="fb635d52ad60a0eb8d766e26c6af78ff6326c0fe12a13cb6380bf9d60d1896f4" Feb 17 09:06:52 crc kubenswrapper[4813]: E0217 09:06:52.984600 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb635d52ad60a0eb8d766e26c6af78ff6326c0fe12a13cb6380bf9d60d1896f4\": container with ID starting with fb635d52ad60a0eb8d766e26c6af78ff6326c0fe12a13cb6380bf9d60d1896f4 not found: ID does not exist" containerID="fb635d52ad60a0eb8d766e26c6af78ff6326c0fe12a13cb6380bf9d60d1896f4" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.984731 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb635d52ad60a0eb8d766e26c6af78ff6326c0fe12a13cb6380bf9d60d1896f4"} err="failed to get container status \"fb635d52ad60a0eb8d766e26c6af78ff6326c0fe12a13cb6380bf9d60d1896f4\": rpc error: code = NotFound desc = could not find container \"fb635d52ad60a0eb8d766e26c6af78ff6326c0fe12a13cb6380bf9d60d1896f4\": container with ID starting with fb635d52ad60a0eb8d766e26c6af78ff6326c0fe12a13cb6380bf9d60d1896f4 not found: ID does not exist" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.984805 4813 scope.go:117] "RemoveContainer" containerID="c59fe5bf139cf63f1f6f75ff9b2885659d59271d431fe3f3a234028b63c17f00" Feb 17 
09:06:52 crc kubenswrapper[4813]: E0217 09:06:52.985510 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c59fe5bf139cf63f1f6f75ff9b2885659d59271d431fe3f3a234028b63c17f00\": container with ID starting with c59fe5bf139cf63f1f6f75ff9b2885659d59271d431fe3f3a234028b63c17f00 not found: ID does not exist" containerID="c59fe5bf139cf63f1f6f75ff9b2885659d59271d431fe3f3a234028b63c17f00" Feb 17 09:06:52 crc kubenswrapper[4813]: I0217 09:06:52.985591 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c59fe5bf139cf63f1f6f75ff9b2885659d59271d431fe3f3a234028b63c17f00"} err="failed to get container status \"c59fe5bf139cf63f1f6f75ff9b2885659d59271d431fe3f3a234028b63c17f00\": rpc error: code = NotFound desc = could not find container \"c59fe5bf139cf63f1f6f75ff9b2885659d59271d431fe3f3a234028b63c17f00\": container with ID starting with c59fe5bf139cf63f1f6f75ff9b2885659d59271d431fe3f3a234028b63c17f00 not found: ID does not exist" Feb 17 09:06:53 crc kubenswrapper[4813]: I0217 09:06:53.131702 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678cdd4d-9d0e-4e2a-b9de-096d52ced581" path="/var/lib/kubelet/pods/678cdd4d-9d0e-4e2a-b9de-096d52ced581/volumes" Feb 17 09:06:54 crc kubenswrapper[4813]: I0217 09:06:54.811631 4813 scope.go:117] "RemoveContainer" containerID="5752643aefbf275d9c522e56646e9b3bd85cc7b1488cede4c89d7ade8d252022" Feb 17 09:06:54 crc kubenswrapper[4813]: I0217 09:06:54.839023 4813 scope.go:117] "RemoveContainer" containerID="7b7faa8cad4885c4bb81bbe2f67b28008a8cc82c91cedfc3202b95bbc2859418" Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 09:06:55.111740 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:06:55 crc kubenswrapper[4813]: E0217 09:06:55.112180 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 09:06:55.393052 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 09:06:55.393335 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="ceilometer-central-agent" containerID="cri-o://4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926" gracePeriod=30 Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 09:06:55.393392 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="sg-core" containerID="cri-o://b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7" gracePeriod=30 Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 09:06:55.393443 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="ceilometer-notification-agent" containerID="cri-o://db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18" gracePeriod=30 Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 09:06:55.393472 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="proxy-httpd" containerID="cri-o://035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef" gracePeriod=30 Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 
09:06:55.421237 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.199:3000/\": read tcp 10.217.0.2:36208->10.217.0.199:3000: read: connection reset by peer" Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 09:06:55.929035 4813 generic.go:334] "Generic (PLEG): container finished" podID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerID="035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef" exitCode=0 Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 09:06:55.929063 4813 generic.go:334] "Generic (PLEG): container finished" podID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerID="b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7" exitCode=2 Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 09:06:55.929072 4813 generic.go:334] "Generic (PLEG): container finished" podID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerID="4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926" exitCode=0 Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 09:06:55.929095 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73","Type":"ContainerDied","Data":"035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef"} Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 09:06:55.929122 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73","Type":"ContainerDied","Data":"b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7"} Feb 17 09:06:55 crc kubenswrapper[4813]: I0217 09:06:55.929136 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73","Type":"ContainerDied","Data":"4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926"} Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.409851 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.572971 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-run-httpd\") pod \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.573480 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-ceilometer-tls-certs\") pod \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.573698 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-combined-ca-bundle\") pod \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.573956 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-config-data\") pod \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.573751 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" (UID: "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.574222 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-sg-core-conf-yaml\") pod \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.574479 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-log-httpd\") pod \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.574684 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z677q\" (UniqueName: \"kubernetes.io/projected/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-kube-api-access-z677q\") pod \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.575002 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" (UID: "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.575018 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-scripts\") pod \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\" (UID: \"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73\") " Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.575780 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.575806 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.579260 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-kube-api-access-z677q" (OuterVolumeSpecName: "kube-api-access-z677q") pod "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" (UID: "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73"). InnerVolumeSpecName "kube-api-access-z677q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.585149 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-scripts" (OuterVolumeSpecName: "scripts") pod "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" (UID: "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.609358 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" (UID: "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.643948 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" (UID: "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.644702 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" (UID: "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.662978 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-config-data" (OuterVolumeSpecName: "config-data") pod "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" (UID: "f43ce1d8-e6ec-4ba7-8d7f-69584271ea73"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.677902 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.677952 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.677973 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.677990 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.678007 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.678023 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z677q\" (UniqueName: \"kubernetes.io/projected/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73-kube-api-access-z677q\") on node \"crc\" DevicePath \"\"" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.917454 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.918053 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.961527 4813 generic.go:334] "Generic (PLEG): container finished" podID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerID="db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18" exitCode=0 Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.961598 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.961602 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73","Type":"ContainerDied","Data":"db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18"} Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.961661 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f43ce1d8-e6ec-4ba7-8d7f-69584271ea73","Type":"ContainerDied","Data":"03e636ce7258ab3469f130b12431c7dbc85a29dbee65f6ea0abc3dd3021d2dbf"} Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.961679 4813 scope.go:117] "RemoveContainer" containerID="035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.970299 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:06:58 crc kubenswrapper[4813]: I0217 09:06:58.983975 4813 scope.go:117] "RemoveContainer" containerID="b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.013705 4813 scope.go:117] "RemoveContainer" containerID="db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.021236 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] 
Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.029957 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.030550 4813 scope.go:117] "RemoveContainer" containerID="4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.055680 4813 scope.go:117] "RemoveContainer" containerID="035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef" Feb 17 09:06:59 crc kubenswrapper[4813]: E0217 09:06:59.056281 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef\": container with ID starting with 035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef not found: ID does not exist" containerID="035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.056387 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef"} err="failed to get container status \"035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef\": rpc error: code = NotFound desc = could not find container \"035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef\": container with ID starting with 035306304e779493a4416fdcda39a50c1f5584946cc3ac26e2a8c46483c928ef not found: ID does not exist" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.056420 4813 scope.go:117] "RemoveContainer" containerID="b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7" Feb 17 09:06:59 crc kubenswrapper[4813]: E0217 09:06:59.056708 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7\": container with ID starting with b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7 not found: ID does not exist" containerID="b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.056759 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7"} err="failed to get container status \"b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7\": rpc error: code = NotFound desc = could not find container \"b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7\": container with ID starting with b525a603c721ce076cd383eab569e4d8f1046c96a200a6216971dc2bef49d7e7 not found: ID does not exist" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.056789 4813 scope.go:117] "RemoveContainer" containerID="db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18" Feb 17 09:06:59 crc kubenswrapper[4813]: E0217 09:06:59.057284 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18\": container with ID starting with db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18 not found: ID does not exist" containerID="db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.057332 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18"} err="failed to get container status \"db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18\": rpc error: code = NotFound desc = could not find container \"db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18\": container with ID 
starting with db8ee3c36e1d187dd6d9c80ad1285f04236f28598d31fe54e42c4a1d8f5d9f18 not found: ID does not exist" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.057350 4813 scope.go:117] "RemoveContainer" containerID="4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926" Feb 17 09:06:59 crc kubenswrapper[4813]: E0217 09:06:59.057610 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926\": container with ID starting with 4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926 not found: ID does not exist" containerID="4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.057644 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926"} err="failed to get container status \"4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926\": rpc error: code = NotFound desc = could not find container \"4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926\": container with ID starting with 4d92719704095f2f4e7c85fb9df168496a93fa8ed9c186a8a0460300a47fb926 not found: ID does not exist" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.076475 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:59 crc kubenswrapper[4813]: E0217 09:06:59.076898 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="proxy-httpd" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.076920 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="proxy-httpd" Feb 17 09:06:59 crc kubenswrapper[4813]: E0217 09:06:59.076934 4813 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="ceilometer-central-agent" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.076942 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="ceilometer-central-agent" Feb 17 09:06:59 crc kubenswrapper[4813]: E0217 09:06:59.076957 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="sg-core" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.076966 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="sg-core" Feb 17 09:06:59 crc kubenswrapper[4813]: E0217 09:06:59.076976 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="ceilometer-notification-agent" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.076984 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="ceilometer-notification-agent" Feb 17 09:06:59 crc kubenswrapper[4813]: E0217 09:06:59.077002 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678cdd4d-9d0e-4e2a-b9de-096d52ced581" containerName="registry-server" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.077009 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="678cdd4d-9d0e-4e2a-b9de-096d52ced581" containerName="registry-server" Feb 17 09:06:59 crc kubenswrapper[4813]: E0217 09:06:59.077025 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678cdd4d-9d0e-4e2a-b9de-096d52ced581" containerName="extract-utilities" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.077033 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="678cdd4d-9d0e-4e2a-b9de-096d52ced581" containerName="extract-utilities" Feb 17 09:06:59 crc kubenswrapper[4813]: E0217 09:06:59.077051 4813 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="678cdd4d-9d0e-4e2a-b9de-096d52ced581" containerName="extract-content" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.077059 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="678cdd4d-9d0e-4e2a-b9de-096d52ced581" containerName="extract-content" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.077286 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="proxy-httpd" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.077324 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="sg-core" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.077338 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="678cdd4d-9d0e-4e2a-b9de-096d52ced581" containerName="registry-server" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.077351 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="ceilometer-notification-agent" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.077370 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" containerName="ceilometer-central-agent" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.087977 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.091747 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.092767 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.093477 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.100689 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.121514 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f43ce1d8-e6ec-4ba7-8d7f-69584271ea73" path="/var/lib/kubelet/pods/f43ce1d8-e6ec-4ba7-8d7f-69584271ea73/volumes" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.188030 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fffqv\" (UniqueName: \"kubernetes.io/projected/037747e7-40da-4ad4-b721-daccf5fba481-kube-api-access-fffqv\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.188087 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/037747e7-40da-4ad4-b721-daccf5fba481-run-httpd\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.188120 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.188141 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.188190 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-config-data\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.188211 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.188435 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-scripts\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.188450 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/037747e7-40da-4ad4-b721-daccf5fba481-log-httpd\") 
pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.289363 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fffqv\" (UniqueName: \"kubernetes.io/projected/037747e7-40da-4ad4-b721-daccf5fba481-kube-api-access-fffqv\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.289424 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/037747e7-40da-4ad4-b721-daccf5fba481-run-httpd\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.289454 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.289475 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.289524 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-config-data\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc 
kubenswrapper[4813]: I0217 09:06:59.289543 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.289563 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-scripts\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.289579 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/037747e7-40da-4ad4-b721-daccf5fba481-log-httpd\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.289982 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/037747e7-40da-4ad4-b721-daccf5fba481-log-httpd\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.290449 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/037747e7-40da-4ad4-b721-daccf5fba481-run-httpd\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.297122 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.301124 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-config-data\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.306654 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.307178 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.309932 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-scripts\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.318903 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fffqv\" (UniqueName: \"kubernetes.io/projected/037747e7-40da-4ad4-b721-daccf5fba481-kube-api-access-fffqv\") pod \"ceilometer-0\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " pod="watcher-kuttl-default/ceilometer-0" 
Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.410228 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.873715 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:06:59 crc kubenswrapper[4813]: I0217 09:06:59.970779 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"037747e7-40da-4ad4-b721-daccf5fba481","Type":"ContainerStarted","Data":"9b8570e1e178fd7c344486359553da7b510592b04d95a307eeb2981b8854dae7"} Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.016980 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.453215 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn"] Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.462716 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-zn7pn"] Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.498562 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher2cf2-account-delete-5qf2p"] Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.499849 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.515073 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher2cf2-account-delete-5qf2p"] Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.561355 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.561548 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="6fecfdd5-e56e-43ca-88f3-2e5922c30afe" containerName="watcher-applier" containerID="cri-o://12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758" gracePeriod=30 Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.606635 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486db3df-4a15-4107-984f-88dbc868bd79-operator-scripts\") pod \"watcher2cf2-account-delete-5qf2p\" (UID: \"486db3df-4a15-4107-984f-88dbc868bd79\") " pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.606677 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgnbg\" (UniqueName: \"kubernetes.io/projected/486db3df-4a15-4107-984f-88dbc868bd79-kube-api-access-pgnbg\") pod \"watcher2cf2-account-delete-5qf2p\" (UID: \"486db3df-4a15-4107-984f-88dbc868bd79\") " pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.632418 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.632630 4813 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="12e51adc-ff2c-45a8-85f2-395fbb065e5e" containerName="watcher-decision-engine" containerID="cri-o://b123b1de27be1dd37fd9ca53f45bca5ddd912cfb938c3bf35b5eb51143848120" gracePeriod=30 Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.643471 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.643757 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" containerName="watcher-kuttl-api-log" containerID="cri-o://2b6e96e3f8974e3e73b1d26ba04c0d1eb7a1bc11f4df592338cacd2623f0af87" gracePeriod=30 Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.644008 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" containerName="watcher-api" containerID="cri-o://ca6aee87ce0eb2ef212797e6968c69a386b919e02ad618eb063ded507395d8ef" gracePeriod=30 Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.708016 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486db3df-4a15-4107-984f-88dbc868bd79-operator-scripts\") pod \"watcher2cf2-account-delete-5qf2p\" (UID: \"486db3df-4a15-4107-984f-88dbc868bd79\") " pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.708068 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgnbg\" (UniqueName: \"kubernetes.io/projected/486db3df-4a15-4107-984f-88dbc868bd79-kube-api-access-pgnbg\") pod \"watcher2cf2-account-delete-5qf2p\" (UID: \"486db3df-4a15-4107-984f-88dbc868bd79\") " pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" Feb 17 
09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.709773 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486db3df-4a15-4107-984f-88dbc868bd79-operator-scripts\") pod \"watcher2cf2-account-delete-5qf2p\" (UID: \"486db3df-4a15-4107-984f-88dbc868bd79\") " pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.732241 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgnbg\" (UniqueName: \"kubernetes.io/projected/486db3df-4a15-4107-984f-88dbc868bd79-kube-api-access-pgnbg\") pod \"watcher2cf2-account-delete-5qf2p\" (UID: \"486db3df-4a15-4107-984f-88dbc868bd79\") " pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" Feb 17 09:07:00 crc kubenswrapper[4813]: I0217 09:07:00.816485 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" Feb 17 09:07:01 crc kubenswrapper[4813]: I0217 09:07:00.998036 4813 generic.go:334] "Generic (PLEG): container finished" podID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" containerID="2b6e96e3f8974e3e73b1d26ba04c0d1eb7a1bc11f4df592338cacd2623f0af87" exitCode=143 Feb 17 09:07:01 crc kubenswrapper[4813]: I0217 09:07:00.998398 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce","Type":"ContainerDied","Data":"2b6e96e3f8974e3e73b1d26ba04c0d1eb7a1bc11f4df592338cacd2623f0af87"} Feb 17 09:07:01 crc kubenswrapper[4813]: I0217 09:07:01.002001 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"037747e7-40da-4ad4-b721-daccf5fba481","Type":"ContainerStarted","Data":"2d392ae01e36db932169bf96d8da7be1e9e6a6934cd4f3c402700eab4f8e2940"} Feb 17 09:07:01 crc kubenswrapper[4813]: I0217 09:07:01.142023 4813 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="c5e6c1c8-a518-48a9-b56a-34fdd8c847af" path="/var/lib/kubelet/pods/c5e6c1c8-a518-48a9-b56a-34fdd8c847af/volumes" Feb 17 09:07:01 crc kubenswrapper[4813]: I0217 09:07:01.150416 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher2cf2-account-delete-5qf2p"] Feb 17 09:07:01 crc kubenswrapper[4813]: I0217 09:07:01.171422 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p287t"] Feb 17 09:07:01 crc kubenswrapper[4813]: E0217 09:07:01.343608 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:07:01 crc kubenswrapper[4813]: E0217 09:07:01.345341 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:07:01 crc kubenswrapper[4813]: E0217 09:07:01.355326 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Feb 17 09:07:01 crc kubenswrapper[4813]: E0217 09:07:01.355378 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
podUID="6fecfdd5-e56e-43ca-88f3-2e5922c30afe" containerName="watcher-applier" Feb 17 09:07:01 crc kubenswrapper[4813]: I0217 09:07:01.803677 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.202:9322/\": read tcp 10.217.0.2:38750->10.217.0.202:9322: read: connection reset by peer" Feb 17 09:07:01 crc kubenswrapper[4813]: I0217 09:07:01.804287 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.202:9322/\": read tcp 10.217.0.2:38756->10.217.0.202:9322: read: connection reset by peer" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.022924 4813 generic.go:334] "Generic (PLEG): container finished" podID="486db3df-4a15-4107-984f-88dbc868bd79" containerID="459ca85c1f415e76aed59b529d21945b6b0af1ade69a6b21937f7fa8b64b3820" exitCode=0 Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.023011 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" event={"ID":"486db3df-4a15-4107-984f-88dbc868bd79","Type":"ContainerDied","Data":"459ca85c1f415e76aed59b529d21945b6b0af1ade69a6b21937f7fa8b64b3820"} Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.023042 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" event={"ID":"486db3df-4a15-4107-984f-88dbc868bd79","Type":"ContainerStarted","Data":"99fb8abd105af447a94f55fbc981c2a7d8c41edc9d5b899bdb117d8431b4f939"} Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.024971 4813 generic.go:334] "Generic (PLEG): container finished" podID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" 
containerID="ca6aee87ce0eb2ef212797e6968c69a386b919e02ad618eb063ded507395d8ef" exitCode=0 Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.025034 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce","Type":"ContainerDied","Data":"ca6aee87ce0eb2ef212797e6968c69a386b919e02ad618eb063ded507395d8ef"} Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.062674 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"037747e7-40da-4ad4-b721-daccf5fba481","Type":"ContainerStarted","Data":"bb41c80cf04f33a6bb4c64ecef1b3144ce4e6881c624b80e6baea4cfa4d42e94"} Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.062727 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"037747e7-40da-4ad4-b721-daccf5fba481","Type":"ContainerStarted","Data":"191c2a9bcec978e342337f82464346d844bce1b3a167a7d2716910f644e96a40"} Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.062832 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p287t" podUID="cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" containerName="registry-server" containerID="cri-o://ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523" gracePeriod=2 Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.562098 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.575550 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-logs\") pod \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.575616 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-cert-memcached-mtls\") pod \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.575645 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86nbs\" (UniqueName: \"kubernetes.io/projected/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-kube-api-access-86nbs\") pod \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.575683 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-custom-prometheus-ca\") pod \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.575778 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-config-data\") pod \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.575834 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-combined-ca-bundle\") pod \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\" (UID: \"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce\") " Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.576019 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-logs" (OuterVolumeSpecName: "logs") pod "fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" (UID: "fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.576235 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.591736 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-kube-api-access-86nbs" (OuterVolumeSpecName: "kube-api-access-86nbs") pod "fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" (UID: "fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce"). InnerVolumeSpecName "kube-api-access-86nbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.631765 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" (UID: "fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.634439 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-config-data" (OuterVolumeSpecName: "config-data") pod "fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" (UID: "fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.639279 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.639378 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" (UID: "fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.677380 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qp66\" (UniqueName: \"kubernetes.io/projected/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-kube-api-access-5qp66\") pod \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\" (UID: \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\") " Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.677589 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-catalog-content\") pod \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\" (UID: \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\") " Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.677667 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-utilities\") pod \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\" (UID: \"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e\") " Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.677960 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86nbs\" (UniqueName: \"kubernetes.io/projected/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-kube-api-access-86nbs\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.677976 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.677986 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 
09:07:02.677996 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.679648 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-utilities" (OuterVolumeSpecName: "utilities") pod "cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" (UID: "cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.684411 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-kube-api-access-5qp66" (OuterVolumeSpecName: "kube-api-access-5qp66") pod "cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" (UID: "cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e"). InnerVolumeSpecName "kube-api-access-5qp66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.701263 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" (UID: "cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.707029 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" (UID: "fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.789676 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.789714 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qp66\" (UniqueName: \"kubernetes.io/projected/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-kube-api-access-5qp66\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.789731 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:02 crc kubenswrapper[4813]: I0217 09:07:02.789740 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.075369 4813 generic.go:334] "Generic (PLEG): container finished" podID="cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" containerID="ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523" exitCode=0 Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.075442 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p287t" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.075464 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p287t" event={"ID":"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e","Type":"ContainerDied","Data":"ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523"} Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.075508 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p287t" event={"ID":"cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e","Type":"ContainerDied","Data":"d06a74a33479803be5257beb2c81a54585905c65fe735838d3d20015762d3490"} Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.075529 4813 scope.go:117] "RemoveContainer" containerID="ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.085481 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.085549 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce","Type":"ContainerDied","Data":"b2dfce26e6bd73c0f99ca1052429b45137339456431381ee116f074f8f7bd90b"} Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.265536 4813 scope.go:117] "RemoveContainer" containerID="7bdbee95059c9cb9f8247a540dafff3e7c2af5f9b3b77158ee6c6f3a717125da" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.272451 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p287t"] Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.302830 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p287t"] Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.311146 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.316997 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.323352 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.331591 4813 scope.go:117] "RemoveContainer" containerID="a2c9500705dbb3d0609f24e6bcbb76c7050854c9701b12579c6857b80ff8c9d9" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.363922 4813 scope.go:117] "RemoveContainer" containerID="ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523" Feb 17 09:07:03 crc kubenswrapper[4813]: E0217 09:07:03.364590 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523\": container with ID starting with ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523 not found: ID does not exist" containerID="ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.364629 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523"} err="failed to get container status \"ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523\": rpc error: code = NotFound desc = could not find container \"ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523\": container with ID starting with ae60818420fec03706d6a59b300415734d443e67a625f3002c048007165cb523 not found: ID does not exist" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.364654 4813 scope.go:117] "RemoveContainer" containerID="7bdbee95059c9cb9f8247a540dafff3e7c2af5f9b3b77158ee6c6f3a717125da" Feb 17 09:07:03 crc kubenswrapper[4813]: E0217 09:07:03.365029 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bdbee95059c9cb9f8247a540dafff3e7c2af5f9b3b77158ee6c6f3a717125da\": container with ID starting with 7bdbee95059c9cb9f8247a540dafff3e7c2af5f9b3b77158ee6c6f3a717125da not found: ID does not exist" containerID="7bdbee95059c9cb9f8247a540dafff3e7c2af5f9b3b77158ee6c6f3a717125da" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.365052 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bdbee95059c9cb9f8247a540dafff3e7c2af5f9b3b77158ee6c6f3a717125da"} err="failed to get container status \"7bdbee95059c9cb9f8247a540dafff3e7c2af5f9b3b77158ee6c6f3a717125da\": rpc error: code = NotFound desc = could not find container \"7bdbee95059c9cb9f8247a540dafff3e7c2af5f9b3b77158ee6c6f3a717125da\": container with ID 
starting with 7bdbee95059c9cb9f8247a540dafff3e7c2af5f9b3b77158ee6c6f3a717125da not found: ID does not exist" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.365065 4813 scope.go:117] "RemoveContainer" containerID="a2c9500705dbb3d0609f24e6bcbb76c7050854c9701b12579c6857b80ff8c9d9" Feb 17 09:07:03 crc kubenswrapper[4813]: E0217 09:07:03.365515 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c9500705dbb3d0609f24e6bcbb76c7050854c9701b12579c6857b80ff8c9d9\": container with ID starting with a2c9500705dbb3d0609f24e6bcbb76c7050854c9701b12579c6857b80ff8c9d9 not found: ID does not exist" containerID="a2c9500705dbb3d0609f24e6bcbb76c7050854c9701b12579c6857b80ff8c9d9" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.365542 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c9500705dbb3d0609f24e6bcbb76c7050854c9701b12579c6857b80ff8c9d9"} err="failed to get container status \"a2c9500705dbb3d0609f24e6bcbb76c7050854c9701b12579c6857b80ff8c9d9\": rpc error: code = NotFound desc = could not find container \"a2c9500705dbb3d0609f24e6bcbb76c7050854c9701b12579c6857b80ff8c9d9\": container with ID starting with a2c9500705dbb3d0609f24e6bcbb76c7050854c9701b12579c6857b80ff8c9d9 not found: ID does not exist" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.365554 4813 scope.go:117] "RemoveContainer" containerID="ca6aee87ce0eb2ef212797e6968c69a386b919e02ad618eb063ded507395d8ef" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.388700 4813 scope.go:117] "RemoveContainer" containerID="2b6e96e3f8974e3e73b1d26ba04c0d1eb7a1bc11f4df592338cacd2623f0af87" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.560105 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.612199 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486db3df-4a15-4107-984f-88dbc868bd79-operator-scripts\") pod \"486db3df-4a15-4107-984f-88dbc868bd79\" (UID: \"486db3df-4a15-4107-984f-88dbc868bd79\") " Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.612361 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgnbg\" (UniqueName: \"kubernetes.io/projected/486db3df-4a15-4107-984f-88dbc868bd79-kube-api-access-pgnbg\") pod \"486db3df-4a15-4107-984f-88dbc868bd79\" (UID: \"486db3df-4a15-4107-984f-88dbc868bd79\") " Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.612976 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/486db3df-4a15-4107-984f-88dbc868bd79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "486db3df-4a15-4107-984f-88dbc868bd79" (UID: "486db3df-4a15-4107-984f-88dbc868bd79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.615346 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486db3df-4a15-4107-984f-88dbc868bd79-kube-api-access-pgnbg" (OuterVolumeSpecName: "kube-api-access-pgnbg") pod "486db3df-4a15-4107-984f-88dbc868bd79" (UID: "486db3df-4a15-4107-984f-88dbc868bd79"). InnerVolumeSpecName "kube-api-access-pgnbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.618715 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.713301 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-cert-memcached-mtls\") pod \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.713373 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-logs\") pod \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.713397 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-combined-ca-bundle\") pod \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.713472 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m2sf\" (UniqueName: \"kubernetes.io/projected/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-kube-api-access-6m2sf\") pod \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.713500 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-config-data\") pod \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\" (UID: \"6fecfdd5-e56e-43ca-88f3-2e5922c30afe\") " Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.713843 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/486db3df-4a15-4107-984f-88dbc868bd79-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.713854 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgnbg\" (UniqueName: \"kubernetes.io/projected/486db3df-4a15-4107-984f-88dbc868bd79-kube-api-access-pgnbg\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.713917 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-logs" (OuterVolumeSpecName: "logs") pod "6fecfdd5-e56e-43ca-88f3-2e5922c30afe" (UID: "6fecfdd5-e56e-43ca-88f3-2e5922c30afe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.716043 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-kube-api-access-6m2sf" (OuterVolumeSpecName: "kube-api-access-6m2sf") pod "6fecfdd5-e56e-43ca-88f3-2e5922c30afe" (UID: "6fecfdd5-e56e-43ca-88f3-2e5922c30afe"). InnerVolumeSpecName "kube-api-access-6m2sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.737210 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fecfdd5-e56e-43ca-88f3-2e5922c30afe" (UID: "6fecfdd5-e56e-43ca-88f3-2e5922c30afe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.758088 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-config-data" (OuterVolumeSpecName: "config-data") pod "6fecfdd5-e56e-43ca-88f3-2e5922c30afe" (UID: "6fecfdd5-e56e-43ca-88f3-2e5922c30afe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.775558 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "6fecfdd5-e56e-43ca-88f3-2e5922c30afe" (UID: "6fecfdd5-e56e-43ca-88f3-2e5922c30afe"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.815694 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.815748 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.815762 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.815773 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m2sf\" (UniqueName: \"kubernetes.io/projected/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-kube-api-access-6m2sf\") on node \"crc\" 
DevicePath \"\"" Feb 17 09:07:03 crc kubenswrapper[4813]: I0217 09:07:03.815786 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fecfdd5-e56e-43ca-88f3-2e5922c30afe-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.099846 4813 generic.go:334] "Generic (PLEG): container finished" podID="6fecfdd5-e56e-43ca-88f3-2e5922c30afe" containerID="12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758" exitCode=0 Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.099923 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"6fecfdd5-e56e-43ca-88f3-2e5922c30afe","Type":"ContainerDied","Data":"12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758"} Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.099937 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.099956 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"6fecfdd5-e56e-43ca-88f3-2e5922c30afe","Type":"ContainerDied","Data":"3015319753ef5e472e1017f0d01bd0bda2d8297ad2f7d0feb04b09a5445f0790"} Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.099997 4813 scope.go:117] "RemoveContainer" containerID="12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758" Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.107716 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"037747e7-40da-4ad4-b721-daccf5fba481","Type":"ContainerStarted","Data":"d3e2839c537b6de17959ef7f2304d59df8d3f5094f2bb71997bcba7bf1cca496"} Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.108002 4813 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/ceilometer-0" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="ceilometer-central-agent" containerID="cri-o://2d392ae01e36db932169bf96d8da7be1e9e6a6934cd4f3c402700eab4f8e2940" gracePeriod=30 Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.108413 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="proxy-httpd" containerID="cri-o://d3e2839c537b6de17959ef7f2304d59df8d3f5094f2bb71997bcba7bf1cca496" gracePeriod=30 Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.108434 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.108548 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="sg-core" containerID="cri-o://bb41c80cf04f33a6bb4c64ecef1b3144ce4e6881c624b80e6baea4cfa4d42e94" gracePeriod=30 Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.108643 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="ceilometer-notification-agent" containerID="cri-o://191c2a9bcec978e342337f82464346d844bce1b3a167a7d2716910f644e96a40" gracePeriod=30 Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.128572 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.130608 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2cf2-account-delete-5qf2p" event={"ID":"486db3df-4a15-4107-984f-88dbc868bd79","Type":"ContainerDied","Data":"99fb8abd105af447a94f55fbc981c2a7d8c41edc9d5b899bdb117d8431b4f939"} Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.130799 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99fb8abd105af447a94f55fbc981c2a7d8c41edc9d5b899bdb117d8431b4f939" Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.187898 4813 scope.go:117] "RemoveContainer" containerID="12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758" Feb 17 09:07:04 crc kubenswrapper[4813]: E0217 09:07:04.188516 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758\": container with ID starting with 12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758 not found: ID does not exist" containerID="12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758" Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.188552 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758"} err="failed to get container status \"12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758\": rpc error: code = NotFound desc = could not find container \"12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758\": container with ID starting with 12246ac818dc3e4e3b9cfa3e4a41acf505e6009daafe4d541ed1eadb4c3bb758 not found: ID does not exist" Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.196634 4813 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.98015361 podStartE2EDuration="5.196618245s" podCreationTimestamp="2026-02-17 09:06:59 +0000 UTC" firstStartedPulling="2026-02-17 09:06:59.878774305 +0000 UTC m=+1567.539535528" lastFinishedPulling="2026-02-17 09:07:03.09523894 +0000 UTC m=+1570.756000163" observedRunningTime="2026-02-17 09:07:04.158584417 +0000 UTC m=+1571.819345640" watchObservedRunningTime="2026-02-17 09:07:04.196618245 +0000 UTC m=+1571.857379488" Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.198581 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:07:04 crc kubenswrapper[4813]: I0217 09:07:04.210023 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.166594 4813 generic.go:334] "Generic (PLEG): container finished" podID="037747e7-40da-4ad4-b721-daccf5fba481" containerID="d3e2839c537b6de17959ef7f2304d59df8d3f5094f2bb71997bcba7bf1cca496" exitCode=0 Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.166946 4813 generic.go:334] "Generic (PLEG): container finished" podID="037747e7-40da-4ad4-b721-daccf5fba481" containerID="bb41c80cf04f33a6bb4c64ecef1b3144ce4e6881c624b80e6baea4cfa4d42e94" exitCode=2 Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.166959 4813 generic.go:334] "Generic (PLEG): container finished" podID="037747e7-40da-4ad4-b721-daccf5fba481" containerID="191c2a9bcec978e342337f82464346d844bce1b3a167a7d2716910f644e96a40" exitCode=0 Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.166970 4813 generic.go:334] "Generic (PLEG): container finished" podID="037747e7-40da-4ad4-b721-daccf5fba481" containerID="2d392ae01e36db932169bf96d8da7be1e9e6a6934cd4f3c402700eab4f8e2940" exitCode=0 Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.172975 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6fecfdd5-e56e-43ca-88f3-2e5922c30afe" path="/var/lib/kubelet/pods/6fecfdd5-e56e-43ca-88f3-2e5922c30afe/volumes" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.173587 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" path="/var/lib/kubelet/pods/cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e/volumes" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.174574 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" path="/var/lib/kubelet/pods/fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce/volumes" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.175765 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"037747e7-40da-4ad4-b721-daccf5fba481","Type":"ContainerDied","Data":"d3e2839c537b6de17959ef7f2304d59df8d3f5094f2bb71997bcba7bf1cca496"} Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.175800 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"037747e7-40da-4ad4-b721-daccf5fba481","Type":"ContainerDied","Data":"bb41c80cf04f33a6bb4c64ecef1b3144ce4e6881c624b80e6baea4cfa4d42e94"} Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.175842 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"037747e7-40da-4ad4-b721-daccf5fba481","Type":"ContainerDied","Data":"191c2a9bcec978e342337f82464346d844bce1b3a167a7d2716910f644e96a40"} Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.176489 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"037747e7-40da-4ad4-b721-daccf5fba481","Type":"ContainerDied","Data":"2d392ae01e36db932169bf96d8da7be1e9e6a6934cd4f3c402700eab4f8e2940"} Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.363474 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.464198 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-config-data\") pod \"037747e7-40da-4ad4-b721-daccf5fba481\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.464294 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-ceilometer-tls-certs\") pod \"037747e7-40da-4ad4-b721-daccf5fba481\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.464355 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-scripts\") pod \"037747e7-40da-4ad4-b721-daccf5fba481\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.464392 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/037747e7-40da-4ad4-b721-daccf5fba481-run-httpd\") pod \"037747e7-40da-4ad4-b721-daccf5fba481\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.464437 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-combined-ca-bundle\") pod \"037747e7-40da-4ad4-b721-daccf5fba481\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.464480 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/037747e7-40da-4ad4-b721-daccf5fba481-log-httpd\") pod \"037747e7-40da-4ad4-b721-daccf5fba481\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.464513 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-sg-core-conf-yaml\") pod \"037747e7-40da-4ad4-b721-daccf5fba481\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.464539 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fffqv\" (UniqueName: \"kubernetes.io/projected/037747e7-40da-4ad4-b721-daccf5fba481-kube-api-access-fffqv\") pod \"037747e7-40da-4ad4-b721-daccf5fba481\" (UID: \"037747e7-40da-4ad4-b721-daccf5fba481\") " Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.465070 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/037747e7-40da-4ad4-b721-daccf5fba481-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "037747e7-40da-4ad4-b721-daccf5fba481" (UID: "037747e7-40da-4ad4-b721-daccf5fba481"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.465346 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/037747e7-40da-4ad4-b721-daccf5fba481-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.465421 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/037747e7-40da-4ad4-b721-daccf5fba481-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "037747e7-40da-4ad4-b721-daccf5fba481" (UID: "037747e7-40da-4ad4-b721-daccf5fba481"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.469229 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037747e7-40da-4ad4-b721-daccf5fba481-kube-api-access-fffqv" (OuterVolumeSpecName: "kube-api-access-fffqv") pod "037747e7-40da-4ad4-b721-daccf5fba481" (UID: "037747e7-40da-4ad4-b721-daccf5fba481"). InnerVolumeSpecName "kube-api-access-fffqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.470580 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-scripts" (OuterVolumeSpecName: "scripts") pod "037747e7-40da-4ad4-b721-daccf5fba481" (UID: "037747e7-40da-4ad4-b721-daccf5fba481"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.530557 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "037747e7-40da-4ad4-b721-daccf5fba481" (UID: "037747e7-40da-4ad4-b721-daccf5fba481"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.538789 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wltvx"] Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.538957 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "037747e7-40da-4ad4-b721-daccf5fba481" (UID: "037747e7-40da-4ad4-b721-daccf5fba481"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.546559 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wltvx"] Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.552142 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher2cf2-account-delete-5qf2p"] Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.560154 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj"] Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.566807 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher2cf2-account-delete-5qf2p"] Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.568852 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/037747e7-40da-4ad4-b721-daccf5fba481-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.569273 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.569398 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fffqv\" (UniqueName: \"kubernetes.io/projected/037747e7-40da-4ad4-b721-daccf5fba481-kube-api-access-fffqv\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.569462 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.569525 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.575625 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-2cf2-account-create-update-c8xqj"] Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.594527 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "037747e7-40da-4ad4-b721-daccf5fba481" (UID: "037747e7-40da-4ad4-b721-daccf5fba481"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.609886 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-n9g4n"] Feb 17 09:07:05 crc kubenswrapper[4813]: E0217 09:07:05.610180 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" containerName="watcher-api" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.610194 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" containerName="watcher-api" Feb 17 09:07:05 crc kubenswrapper[4813]: E0217 09:07:05.610204 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" containerName="watcher-kuttl-api-log" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.610210 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" containerName="watcher-kuttl-api-log" Feb 17 09:07:05 crc kubenswrapper[4813]: E0217 09:07:05.610222 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fecfdd5-e56e-43ca-88f3-2e5922c30afe" containerName="watcher-applier" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.610228 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6fecfdd5-e56e-43ca-88f3-2e5922c30afe" containerName="watcher-applier" Feb 17 09:07:05 crc kubenswrapper[4813]: E0217 09:07:05.610238 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="proxy-httpd" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.610243 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="proxy-httpd" Feb 17 09:07:05 crc kubenswrapper[4813]: E0217 09:07:05.610253 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" containerName="extract-utilities" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.610259 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" containerName="extract-utilities" Feb 17 09:07:05 crc kubenswrapper[4813]: E0217 09:07:05.610268 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="ceilometer-notification-agent" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.610273 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="ceilometer-notification-agent" Feb 17 09:07:05 crc kubenswrapper[4813]: E0217 09:07:05.610285 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="sg-core" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.610290 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="sg-core" Feb 17 09:07:05 crc kubenswrapper[4813]: E0217 09:07:05.614530 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="ceilometer-central-agent" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.614576 
4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="ceilometer-central-agent" Feb 17 09:07:05 crc kubenswrapper[4813]: E0217 09:07:05.614612 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" containerName="registry-server" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.614621 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" containerName="registry-server" Feb 17 09:07:05 crc kubenswrapper[4813]: E0217 09:07:05.614649 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" containerName="extract-content" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.614658 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" containerName="extract-content" Feb 17 09:07:05 crc kubenswrapper[4813]: E0217 09:07:05.614673 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486db3df-4a15-4107-984f-88dbc868bd79" containerName="mariadb-account-delete" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.614681 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="486db3df-4a15-4107-984f-88dbc868bd79" containerName="mariadb-account-delete" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.614998 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fecfdd5-e56e-43ca-88f3-2e5922c30afe" containerName="watcher-applier" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.615021 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="ceilometer-notification-agent" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.615032 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="ceilometer-central-agent" Feb 17 09:07:05 crc 
kubenswrapper[4813]: I0217 09:07:05.615050 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="proxy-httpd" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.615060 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="037747e7-40da-4ad4-b721-daccf5fba481" containerName="sg-core" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.615075 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" containerName="watcher-api" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.615084 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc77f8b9-3a51-44e8-a56e-d87cf4a1c4ce" containerName="watcher-kuttl-api-log" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.615094 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="486db3df-4a15-4107-984f-88dbc868bd79" containerName="mariadb-account-delete" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.615104 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cadb58ec-d8e5-4f81-b0fb-5233d5c31d3e" containerName="registry-server" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.615841 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-n9g4n" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.629485 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-config-data" (OuterVolumeSpecName: "config-data") pod "037747e7-40da-4ad4-b721-daccf5fba481" (UID: "037747e7-40da-4ad4-b721-daccf5fba481"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.638343 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-n9g4n"] Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.671475 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckmcv\" (UniqueName: \"kubernetes.io/projected/4c5db388-49f3-42fa-839d-bf0eaefa3576-kube-api-access-ckmcv\") pod \"watcher-db-create-n9g4n\" (UID: \"4c5db388-49f3-42fa-839d-bf0eaefa3576\") " pod="watcher-kuttl-default/watcher-db-create-n9g4n" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.671675 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5db388-49f3-42fa-839d-bf0eaefa3576-operator-scripts\") pod \"watcher-db-create-n9g4n\" (UID: \"4c5db388-49f3-42fa-839d-bf0eaefa3576\") " pod="watcher-kuttl-default/watcher-db-create-n9g4n" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.671793 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.671816 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/037747e7-40da-4ad4-b721-daccf5fba481-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.721152 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-z7nnf"] Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.722050 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.730347 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-z7nnf"] Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.731302 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.772917 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckmcv\" (UniqueName: \"kubernetes.io/projected/4c5db388-49f3-42fa-839d-bf0eaefa3576-kube-api-access-ckmcv\") pod \"watcher-db-create-n9g4n\" (UID: \"4c5db388-49f3-42fa-839d-bf0eaefa3576\") " pod="watcher-kuttl-default/watcher-db-create-n9g4n" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.773005 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5db388-49f3-42fa-839d-bf0eaefa3576-operator-scripts\") pod \"watcher-db-create-n9g4n\" (UID: \"4c5db388-49f3-42fa-839d-bf0eaefa3576\") " pod="watcher-kuttl-default/watcher-db-create-n9g4n" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.773081 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908296ce-c930-4c42-aff4-dbd27fe4c613-operator-scripts\") pod \"watcher-test-account-create-update-z7nnf\" (UID: \"908296ce-c930-4c42-aff4-dbd27fe4c613\") " pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.773135 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbbrk\" (UniqueName: \"kubernetes.io/projected/908296ce-c930-4c42-aff4-dbd27fe4c613-kube-api-access-zbbrk\") pod 
\"watcher-test-account-create-update-z7nnf\" (UID: \"908296ce-c930-4c42-aff4-dbd27fe4c613\") " pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.773946 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5db388-49f3-42fa-839d-bf0eaefa3576-operator-scripts\") pod \"watcher-db-create-n9g4n\" (UID: \"4c5db388-49f3-42fa-839d-bf0eaefa3576\") " pod="watcher-kuttl-default/watcher-db-create-n9g4n" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.789606 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckmcv\" (UniqueName: \"kubernetes.io/projected/4c5db388-49f3-42fa-839d-bf0eaefa3576-kube-api-access-ckmcv\") pod \"watcher-db-create-n9g4n\" (UID: \"4c5db388-49f3-42fa-839d-bf0eaefa3576\") " pod="watcher-kuttl-default/watcher-db-create-n9g4n" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.874476 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908296ce-c930-4c42-aff4-dbd27fe4c613-operator-scripts\") pod \"watcher-test-account-create-update-z7nnf\" (UID: \"908296ce-c930-4c42-aff4-dbd27fe4c613\") " pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.874842 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbbrk\" (UniqueName: \"kubernetes.io/projected/908296ce-c930-4c42-aff4-dbd27fe4c613-kube-api-access-zbbrk\") pod \"watcher-test-account-create-update-z7nnf\" (UID: \"908296ce-c930-4c42-aff4-dbd27fe4c613\") " pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.875356 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/908296ce-c930-4c42-aff4-dbd27fe4c613-operator-scripts\") pod \"watcher-test-account-create-update-z7nnf\" (UID: \"908296ce-c930-4c42-aff4-dbd27fe4c613\") " pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.898519 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbbrk\" (UniqueName: \"kubernetes.io/projected/908296ce-c930-4c42-aff4-dbd27fe4c613-kube-api-access-zbbrk\") pod \"watcher-test-account-create-update-z7nnf\" (UID: \"908296ce-c930-4c42-aff4-dbd27fe4c613\") " pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" Feb 17 09:07:05 crc kubenswrapper[4813]: I0217 09:07:05.932575 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-n9g4n" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.041279 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.180419 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"037747e7-40da-4ad4-b721-daccf5fba481","Type":"ContainerDied","Data":"9b8570e1e178fd7c344486359553da7b510592b04d95a307eeb2981b8854dae7"} Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.180787 4813 scope.go:117] "RemoveContainer" containerID="d3e2839c537b6de17959ef7f2304d59df8d3f5094f2bb71997bcba7bf1cca496" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.180470 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.202766 4813 scope.go:117] "RemoveContainer" containerID="bb41c80cf04f33a6bb4c64ecef1b3144ce4e6881c624b80e6baea4cfa4d42e94" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.223058 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.223275 4813 scope.go:117] "RemoveContainer" containerID="191c2a9bcec978e342337f82464346d844bce1b3a167a7d2716910f644e96a40" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.231439 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.257488 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.259342 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.261513 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.262059 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.262175 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.263588 4813 scope.go:117] "RemoveContainer" containerID="2d392ae01e36db932169bf96d8da7be1e9e6a6934cd4f3c402700eab4f8e2940" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.267687 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.386407 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.386459 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.386575 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.386609 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8xj7\" (UniqueName: \"kubernetes.io/projected/d17a4c77-8c43-43fe-b573-1c242d9ab664-kube-api-access-w8xj7\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.386678 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d17a4c77-8c43-43fe-b573-1c242d9ab664-run-httpd\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.386713 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-scripts\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.386728 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d17a4c77-8c43-43fe-b573-1c242d9ab664-log-httpd\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.386745 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-config-data\") pod 
\"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: W0217 09:07:06.461390 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c5db388_49f3_42fa_839d_bf0eaefa3576.slice/crio-1a877e948744c7b6bd8ea6d714ac365ff47028e35872468c624f8adef0cb361b WatchSource:0}: Error finding container 1a877e948744c7b6bd8ea6d714ac365ff47028e35872468c624f8adef0cb361b: Status 404 returned error can't find the container with id 1a877e948744c7b6bd8ea6d714ac365ff47028e35872468c624f8adef0cb361b Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.472661 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-n9g4n"] Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.487778 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.487829 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8xj7\" (UniqueName: \"kubernetes.io/projected/d17a4c77-8c43-43fe-b573-1c242d9ab664-kube-api-access-w8xj7\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.487876 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d17a4c77-8c43-43fe-b573-1c242d9ab664-run-httpd\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 
09:07:06.487899 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-scripts\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.487913 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d17a4c77-8c43-43fe-b573-1c242d9ab664-log-httpd\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.487931 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-config-data\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.487954 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.487975 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.488427 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d17a4c77-8c43-43fe-b573-1c242d9ab664-log-httpd\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.488868 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d17a4c77-8c43-43fe-b573-1c242d9ab664-run-httpd\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.492959 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.492971 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.493009 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.495103 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-config-data\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 
09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.496078 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-scripts\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.507026 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8xj7\" (UniqueName: \"kubernetes.io/projected/d17a4c77-8c43-43fe-b573-1c242d9ab664-kube-api-access-w8xj7\") pod \"ceilometer-0\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.616215 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:06 crc kubenswrapper[4813]: I0217 09:07:06.629166 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-z7nnf"] Feb 17 09:07:06 crc kubenswrapper[4813]: W0217 09:07:06.632666 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod908296ce_c930_4c42_aff4_dbd27fe4c613.slice/crio-5e17d57f2977a281d3cf24b59fba46f9f97510271c6095f7bf4c2a6d84a1a279 WatchSource:0}: Error finding container 5e17d57f2977a281d3cf24b59fba46f9f97510271c6095f7bf4c2a6d84a1a279: Status 404 returned error can't find the container with id 5e17d57f2977a281d3cf24b59fba46f9f97510271c6095f7bf4c2a6d84a1a279 Feb 17 09:07:07 crc kubenswrapper[4813]: I0217 09:07:07.078641 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:07:07 crc kubenswrapper[4813]: W0217 09:07:07.089169 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd17a4c77_8c43_43fe_b573_1c242d9ab664.slice/crio-c492003eefbf42dcc2d71efe2e46fee13010447857b4540312a89007d211d8d4 WatchSource:0}: Error finding container c492003eefbf42dcc2d71efe2e46fee13010447857b4540312a89007d211d8d4: Status 404 returned error can't find the container with id c492003eefbf42dcc2d71efe2e46fee13010447857b4540312a89007d211d8d4 Feb 17 09:07:07 crc kubenswrapper[4813]: I0217 09:07:07.121778 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037747e7-40da-4ad4-b721-daccf5fba481" path="/var/lib/kubelet/pods/037747e7-40da-4ad4-b721-daccf5fba481/volumes" Feb 17 09:07:07 crc kubenswrapper[4813]: I0217 09:07:07.122664 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e3b418-2e0b-433a-8dc9-9981dae6bd9e" path="/var/lib/kubelet/pods/24e3b418-2e0b-433a-8dc9-9981dae6bd9e/volumes" Feb 17 09:07:07 crc kubenswrapper[4813]: I0217 09:07:07.123108 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486db3df-4a15-4107-984f-88dbc868bd79" path="/var/lib/kubelet/pods/486db3df-4a15-4107-984f-88dbc868bd79/volumes" Feb 17 09:07:07 crc kubenswrapper[4813]: I0217 09:07:07.125857 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2010c77-96c8-40e2-8f5d-0501c6b6ae92" path="/var/lib/kubelet/pods/e2010c77-96c8-40e2-8f5d-0501c6b6ae92/volumes" Feb 17 09:07:07 crc kubenswrapper[4813]: I0217 09:07:07.190029 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d17a4c77-8c43-43fe-b573-1c242d9ab664","Type":"ContainerStarted","Data":"c492003eefbf42dcc2d71efe2e46fee13010447857b4540312a89007d211d8d4"} Feb 17 09:07:07 crc kubenswrapper[4813]: I0217 09:07:07.192587 4813 generic.go:334] "Generic (PLEG): container finished" podID="4c5db388-49f3-42fa-839d-bf0eaefa3576" containerID="c44a0bbf4ae0ae342537a299c251554551a8ec522aff52f1b829c1a84958682a" exitCode=0 Feb 17 09:07:07 crc 
kubenswrapper[4813]: I0217 09:07:07.192674 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-n9g4n" event={"ID":"4c5db388-49f3-42fa-839d-bf0eaefa3576","Type":"ContainerDied","Data":"c44a0bbf4ae0ae342537a299c251554551a8ec522aff52f1b829c1a84958682a"} Feb 17 09:07:07 crc kubenswrapper[4813]: I0217 09:07:07.192705 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-n9g4n" event={"ID":"4c5db388-49f3-42fa-839d-bf0eaefa3576","Type":"ContainerStarted","Data":"1a877e948744c7b6bd8ea6d714ac365ff47028e35872468c624f8adef0cb361b"} Feb 17 09:07:07 crc kubenswrapper[4813]: I0217 09:07:07.194823 4813 generic.go:334] "Generic (PLEG): container finished" podID="908296ce-c930-4c42-aff4-dbd27fe4c613" containerID="d67e4de8cc5397ef02565469276ba188340656e58982f51825a5ab7de71676a3" exitCode=0 Feb 17 09:07:07 crc kubenswrapper[4813]: I0217 09:07:07.194870 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" event={"ID":"908296ce-c930-4c42-aff4-dbd27fe4c613","Type":"ContainerDied","Data":"d67e4de8cc5397ef02565469276ba188340656e58982f51825a5ab7de71676a3"} Feb 17 09:07:07 crc kubenswrapper[4813]: I0217 09:07:07.194896 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" event={"ID":"908296ce-c930-4c42-aff4-dbd27fe4c613","Type":"ContainerStarted","Data":"5e17d57f2977a281d3cf24b59fba46f9f97510271c6095f7bf4c2a6d84a1a279"} Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.205742 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d17a4c77-8c43-43fe-b573-1c242d9ab664","Type":"ContainerStarted","Data":"5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d"} Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.668132 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.673001 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-n9g4n" Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.727414 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbbrk\" (UniqueName: \"kubernetes.io/projected/908296ce-c930-4c42-aff4-dbd27fe4c613-kube-api-access-zbbrk\") pod \"908296ce-c930-4c42-aff4-dbd27fe4c613\" (UID: \"908296ce-c930-4c42-aff4-dbd27fe4c613\") " Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.727794 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckmcv\" (UniqueName: \"kubernetes.io/projected/4c5db388-49f3-42fa-839d-bf0eaefa3576-kube-api-access-ckmcv\") pod \"4c5db388-49f3-42fa-839d-bf0eaefa3576\" (UID: \"4c5db388-49f3-42fa-839d-bf0eaefa3576\") " Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.727985 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908296ce-c930-4c42-aff4-dbd27fe4c613-operator-scripts\") pod \"908296ce-c930-4c42-aff4-dbd27fe4c613\" (UID: \"908296ce-c930-4c42-aff4-dbd27fe4c613\") " Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.728012 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5db388-49f3-42fa-839d-bf0eaefa3576-operator-scripts\") pod \"4c5db388-49f3-42fa-839d-bf0eaefa3576\" (UID: \"4c5db388-49f3-42fa-839d-bf0eaefa3576\") " Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.728458 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908296ce-c930-4c42-aff4-dbd27fe4c613-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "908296ce-c930-4c42-aff4-dbd27fe4c613" (UID: "908296ce-c930-4c42-aff4-dbd27fe4c613"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.728529 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5db388-49f3-42fa-839d-bf0eaefa3576-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c5db388-49f3-42fa-839d-bf0eaefa3576" (UID: "4c5db388-49f3-42fa-839d-bf0eaefa3576"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.728873 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908296ce-c930-4c42-aff4-dbd27fe4c613-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.728886 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c5db388-49f3-42fa-839d-bf0eaefa3576-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.731504 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5db388-49f3-42fa-839d-bf0eaefa3576-kube-api-access-ckmcv" (OuterVolumeSpecName: "kube-api-access-ckmcv") pod "4c5db388-49f3-42fa-839d-bf0eaefa3576" (UID: "4c5db388-49f3-42fa-839d-bf0eaefa3576"). InnerVolumeSpecName "kube-api-access-ckmcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.731595 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908296ce-c930-4c42-aff4-dbd27fe4c613-kube-api-access-zbbrk" (OuterVolumeSpecName: "kube-api-access-zbbrk") pod "908296ce-c930-4c42-aff4-dbd27fe4c613" (UID: "908296ce-c930-4c42-aff4-dbd27fe4c613"). InnerVolumeSpecName "kube-api-access-zbbrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.830145 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbbrk\" (UniqueName: \"kubernetes.io/projected/908296ce-c930-4c42-aff4-dbd27fe4c613-kube-api-access-zbbrk\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:08 crc kubenswrapper[4813]: I0217 09:07:08.830176 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckmcv\" (UniqueName: \"kubernetes.io/projected/4c5db388-49f3-42fa-839d-bf0eaefa3576-kube-api-access-ckmcv\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.115500 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:07:09 crc kubenswrapper[4813]: E0217 09:07:09.115749 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.216430 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"d17a4c77-8c43-43fe-b573-1c242d9ab664","Type":"ContainerStarted","Data":"748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2"} Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.216477 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d17a4c77-8c43-43fe-b573-1c242d9ab664","Type":"ContainerStarted","Data":"fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc"} Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.219092 4813 generic.go:334] "Generic (PLEG): container finished" podID="12e51adc-ff2c-45a8-85f2-395fbb065e5e" containerID="b123b1de27be1dd37fd9ca53f45bca5ddd912cfb938c3bf35b5eb51143848120" exitCode=0 Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.219158 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"12e51adc-ff2c-45a8-85f2-395fbb065e5e","Type":"ContainerDied","Data":"b123b1de27be1dd37fd9ca53f45bca5ddd912cfb938c3bf35b5eb51143848120"} Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.221339 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-n9g4n" event={"ID":"4c5db388-49f3-42fa-839d-bf0eaefa3576","Type":"ContainerDied","Data":"1a877e948744c7b6bd8ea6d714ac365ff47028e35872468c624f8adef0cb361b"} Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.221363 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a877e948744c7b6bd8ea6d714ac365ff47028e35872468c624f8adef0cb361b" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.221384 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-n9g4n" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.225263 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" event={"ID":"908296ce-c930-4c42-aff4-dbd27fe4c613","Type":"ContainerDied","Data":"5e17d57f2977a281d3cf24b59fba46f9f97510271c6095f7bf4c2a6d84a1a279"} Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.225478 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e17d57f2977a281d3cf24b59fba46f9f97510271c6095f7bf4c2a6d84a1a279" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.225564 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-z7nnf" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.383334 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.440183 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkdwr\" (UniqueName: \"kubernetes.io/projected/12e51adc-ff2c-45a8-85f2-395fbb065e5e-kube-api-access-gkdwr\") pod \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.440247 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e51adc-ff2c-45a8-85f2-395fbb065e5e-logs\") pod \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.440337 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-custom-prometheus-ca\") pod \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.440377 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-combined-ca-bundle\") pod \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.440434 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-config-data\") pod \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.440511 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-cert-memcached-mtls\") pod \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\" (UID: \"12e51adc-ff2c-45a8-85f2-395fbb065e5e\") " Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.451900 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12e51adc-ff2c-45a8-85f2-395fbb065e5e-logs" (OuterVolumeSpecName: "logs") pod "12e51adc-ff2c-45a8-85f2-395fbb065e5e" (UID: "12e51adc-ff2c-45a8-85f2-395fbb065e5e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.467609 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e51adc-ff2c-45a8-85f2-395fbb065e5e-kube-api-access-gkdwr" (OuterVolumeSpecName: "kube-api-access-gkdwr") pod "12e51adc-ff2c-45a8-85f2-395fbb065e5e" (UID: "12e51adc-ff2c-45a8-85f2-395fbb065e5e"). InnerVolumeSpecName "kube-api-access-gkdwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.505781 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "12e51adc-ff2c-45a8-85f2-395fbb065e5e" (UID: "12e51adc-ff2c-45a8-85f2-395fbb065e5e"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.516518 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12e51adc-ff2c-45a8-85f2-395fbb065e5e" (UID: "12e51adc-ff2c-45a8-85f2-395fbb065e5e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.548254 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.548478 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.548569 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkdwr\" (UniqueName: \"kubernetes.io/projected/12e51adc-ff2c-45a8-85f2-395fbb065e5e-kube-api-access-gkdwr\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.548628 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e51adc-ff2c-45a8-85f2-395fbb065e5e-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.563425 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "12e51adc-ff2c-45a8-85f2-395fbb065e5e" (UID: "12e51adc-ff2c-45a8-85f2-395fbb065e5e"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.579875 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-config-data" (OuterVolumeSpecName: "config-data") pod "12e51adc-ff2c-45a8-85f2-395fbb065e5e" (UID: "12e51adc-ff2c-45a8-85f2-395fbb065e5e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.649661 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:09 crc kubenswrapper[4813]: I0217 09:07:09.649707 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/12e51adc-ff2c-45a8-85f2-395fbb065e5e-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:10 crc kubenswrapper[4813]: I0217 09:07:10.235747 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"12e51adc-ff2c-45a8-85f2-395fbb065e5e","Type":"ContainerDied","Data":"64bb94a1948fcbdf0443e3b804d581b02cbd71c68a91a277d4a112aef4cdd6d2"} Feb 17 09:07:10 crc kubenswrapper[4813]: I0217 09:07:10.236052 4813 scope.go:117] "RemoveContainer" containerID="b123b1de27be1dd37fd9ca53f45bca5ddd912cfb938c3bf35b5eb51143848120" Feb 17 09:07:10 crc kubenswrapper[4813]: I0217 09:07:10.235804 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:10 crc kubenswrapper[4813]: I0217 09:07:10.238850 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d17a4c77-8c43-43fe-b573-1c242d9ab664","Type":"ContainerStarted","Data":"4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46"} Feb 17 09:07:10 crc kubenswrapper[4813]: I0217 09:07:10.240014 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:10 crc kubenswrapper[4813]: I0217 09:07:10.273475 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.414685295 podStartE2EDuration="4.273453812s" podCreationTimestamp="2026-02-17 09:07:06 +0000 UTC" firstStartedPulling="2026-02-17 09:07:07.092298538 +0000 UTC m=+1574.753059761" lastFinishedPulling="2026-02-17 09:07:09.951067065 +0000 UTC m=+1577.611828278" observedRunningTime="2026-02-17 09:07:10.267331837 +0000 UTC m=+1577.928093100" watchObservedRunningTime="2026-02-17 09:07:10.273453812 +0000 UTC m=+1577.934215035" Feb 17 09:07:10 crc kubenswrapper[4813]: I0217 09:07:10.293993 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:07:10 crc kubenswrapper[4813]: I0217 09:07:10.299843 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.002046 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw"] Feb 17 09:07:11 crc kubenswrapper[4813]: E0217 09:07:11.002328 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e51adc-ff2c-45a8-85f2-395fbb065e5e" containerName="watcher-decision-engine" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.002344 
4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e51adc-ff2c-45a8-85f2-395fbb065e5e" containerName="watcher-decision-engine" Feb 17 09:07:11 crc kubenswrapper[4813]: E0217 09:07:11.002372 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908296ce-c930-4c42-aff4-dbd27fe4c613" containerName="mariadb-account-create-update" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.002379 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="908296ce-c930-4c42-aff4-dbd27fe4c613" containerName="mariadb-account-create-update" Feb 17 09:07:11 crc kubenswrapper[4813]: E0217 09:07:11.002386 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5db388-49f3-42fa-839d-bf0eaefa3576" containerName="mariadb-database-create" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.002392 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5db388-49f3-42fa-839d-bf0eaefa3576" containerName="mariadb-database-create" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.002515 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e51adc-ff2c-45a8-85f2-395fbb065e5e" containerName="watcher-decision-engine" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.002530 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5db388-49f3-42fa-839d-bf0eaefa3576" containerName="mariadb-database-create" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.002542 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="908296ce-c930-4c42-aff4-dbd27fe4c613" containerName="mariadb-account-create-update" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.003022 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.004897 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-c6php" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.005768 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.022712 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw"] Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.080329 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-config-data\") pod \"watcher-kuttl-db-sync-pm9dw\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.080458 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqzhj\" (UniqueName: \"kubernetes.io/projected/f511bb8c-5900-4729-927c-4cdf62c78aef-kube-api-access-kqzhj\") pod \"watcher-kuttl-db-sync-pm9dw\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.080543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-db-sync-config-data\") pod \"watcher-kuttl-db-sync-pm9dw\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.080684 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-pm9dw\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.120933 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e51adc-ff2c-45a8-85f2-395fbb065e5e" path="/var/lib/kubelet/pods/12e51adc-ff2c-45a8-85f2-395fbb065e5e/volumes" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.181968 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-pm9dw\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.182090 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-config-data\") pod \"watcher-kuttl-db-sync-pm9dw\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.182134 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqzhj\" (UniqueName: \"kubernetes.io/projected/f511bb8c-5900-4729-927c-4cdf62c78aef-kube-api-access-kqzhj\") pod \"watcher-kuttl-db-sync-pm9dw\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.182154 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-db-sync-config-data\") pod \"watcher-kuttl-db-sync-pm9dw\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.187032 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-config-data\") pod \"watcher-kuttl-db-sync-pm9dw\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.187623 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-pm9dw\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.187737 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-db-sync-config-data\") pod \"watcher-kuttl-db-sync-pm9dw\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.203848 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqzhj\" (UniqueName: \"kubernetes.io/projected/f511bb8c-5900-4729-927c-4cdf62c78aef-kube-api-access-kqzhj\") pod \"watcher-kuttl-db-sync-pm9dw\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.323016 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:11 crc kubenswrapper[4813]: I0217 09:07:11.813222 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw"] Feb 17 09:07:12 crc kubenswrapper[4813]: I0217 09:07:12.260901 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" event={"ID":"f511bb8c-5900-4729-927c-4cdf62c78aef","Type":"ContainerStarted","Data":"1ab39c2329e9404fe79387f57afbe8402760a160bb0e2023f0ad0e90823bf689"} Feb 17 09:07:12 crc kubenswrapper[4813]: I0217 09:07:12.261251 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" event={"ID":"f511bb8c-5900-4729-927c-4cdf62c78aef","Type":"ContainerStarted","Data":"749798f7ce5276c72a857b3bd2567d406ebc1f5450f6ae4890ef2833556aca9b"} Feb 17 09:07:12 crc kubenswrapper[4813]: I0217 09:07:12.288709 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" podStartSLOduration=2.288692405 podStartE2EDuration="2.288692405s" podCreationTimestamp="2026-02-17 09:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:07:12.280390488 +0000 UTC m=+1579.941151711" watchObservedRunningTime="2026-02-17 09:07:12.288692405 +0000 UTC m=+1579.949453628" Feb 17 09:07:15 crc kubenswrapper[4813]: I0217 09:07:15.303697 4813 generic.go:334] "Generic (PLEG): container finished" podID="f511bb8c-5900-4729-927c-4cdf62c78aef" containerID="1ab39c2329e9404fe79387f57afbe8402760a160bb0e2023f0ad0e90823bf689" exitCode=0 Feb 17 09:07:15 crc kubenswrapper[4813]: I0217 09:07:15.303753 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" 
event={"ID":"f511bb8c-5900-4729-927c-4cdf62c78aef","Type":"ContainerDied","Data":"1ab39c2329e9404fe79387f57afbe8402760a160bb0e2023f0ad0e90823bf689"} Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.851316 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.891476 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-config-data\") pod \"f511bb8c-5900-4729-927c-4cdf62c78aef\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.891606 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqzhj\" (UniqueName: \"kubernetes.io/projected/f511bb8c-5900-4729-927c-4cdf62c78aef-kube-api-access-kqzhj\") pod \"f511bb8c-5900-4729-927c-4cdf62c78aef\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.891769 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-combined-ca-bundle\") pod \"f511bb8c-5900-4729-927c-4cdf62c78aef\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.891805 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-db-sync-config-data\") pod \"f511bb8c-5900-4729-927c-4cdf62c78aef\" (UID: \"f511bb8c-5900-4729-927c-4cdf62c78aef\") " Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.931662 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f511bb8c-5900-4729-927c-4cdf62c78aef-kube-api-access-kqzhj" (OuterVolumeSpecName: "kube-api-access-kqzhj") pod "f511bb8c-5900-4729-927c-4cdf62c78aef" (UID: "f511bb8c-5900-4729-927c-4cdf62c78aef"). InnerVolumeSpecName "kube-api-access-kqzhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.943603 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f511bb8c-5900-4729-927c-4cdf62c78aef" (UID: "f511bb8c-5900-4729-927c-4cdf62c78aef"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.943810 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f511bb8c-5900-4729-927c-4cdf62c78aef" (UID: "f511bb8c-5900-4729-927c-4cdf62c78aef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.979410 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-config-data" (OuterVolumeSpecName: "config-data") pod "f511bb8c-5900-4729-927c-4cdf62c78aef" (UID: "f511bb8c-5900-4729-927c-4cdf62c78aef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.995261 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.995294 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.995316 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f511bb8c-5900-4729-927c-4cdf62c78aef-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:16 crc kubenswrapper[4813]: I0217 09:07:16.995329 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqzhj\" (UniqueName: \"kubernetes.io/projected/f511bb8c-5900-4729-927c-4cdf62c78aef-kube-api-access-kqzhj\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.331499 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" event={"ID":"f511bb8c-5900-4729-927c-4cdf62c78aef","Type":"ContainerDied","Data":"749798f7ce5276c72a857b3bd2567d406ebc1f5450f6ae4890ef2833556aca9b"} Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.331578 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="749798f7ce5276c72a857b3bd2567d406ebc1f5450f6ae4890ef2833556aca9b" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.331580 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.609625 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:07:17 crc kubenswrapper[4813]: E0217 09:07:17.609959 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f511bb8c-5900-4729-927c-4cdf62c78aef" containerName="watcher-kuttl-db-sync" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.609973 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f511bb8c-5900-4729-927c-4cdf62c78aef" containerName="watcher-kuttl-db-sync" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.610106 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f511bb8c-5900-4729-927c-4cdf62c78aef" containerName="watcher-kuttl-db-sync" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.611088 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.614283 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.614286 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-c6php" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.621868 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.624202 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.632117 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.650924 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.681845 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.696047 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.702985 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.705533 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.711290 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.711455 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 
09:07:17.711537 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84d61ff1-13a9-4f6b-870a-151ea8de7237-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.711607 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.711686 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.711763 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.711839 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50406224-f182-48b7-b5b6-566a3245c830-logs\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: 
I0217 09:07:17.711920 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.711990 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82llx\" (UniqueName: \"kubernetes.io/projected/50406224-f182-48b7-b5b6-566a3245c830-kube-api-access-82llx\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.712089 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.712163 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.712235 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcqvx\" (UniqueName: \"kubernetes.io/projected/99e7302c-1d38-4fb6-8dfd-031f981519b3-kube-api-access-xcqvx\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 
09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.712317 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.712395 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99e7302c-1d38-4fb6-8dfd-031f981519b3-logs\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.712466 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.712552 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.712615 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t79jp\" (UniqueName: \"kubernetes.io/projected/84d61ff1-13a9-4f6b-870a-151ea8de7237-kube-api-access-t79jp\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.773550 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.774816 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.777605 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.814181 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.814239 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.814342 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.814380 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-t79jp\" (UniqueName: \"kubernetes.io/projected/84d61ff1-13a9-4f6b-870a-151ea8de7237-kube-api-access-t79jp\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.814455 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.814480 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815174 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815209 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84d61ff1-13a9-4f6b-870a-151ea8de7237-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815228 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815250 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815286 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815359 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50406224-f182-48b7-b5b6-566a3245c830-logs\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815380 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw9md\" (UniqueName: \"kubernetes.io/projected/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-kube-api-access-bw9md\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815423 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815447 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815467 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82llx\" (UniqueName: \"kubernetes.io/projected/50406224-f182-48b7-b5b6-566a3245c830-kube-api-access-82llx\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815489 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815506 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815532 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcqvx\" (UniqueName: 
\"kubernetes.io/projected/99e7302c-1d38-4fb6-8dfd-031f981519b3-kube-api-access-xcqvx\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815550 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815566 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815586 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99e7302c-1d38-4fb6-8dfd-031f981519b3-logs\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.815604 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.816480 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/84d61ff1-13a9-4f6b-870a-151ea8de7237-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.821775 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.824807 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.825240 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.825488 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99e7302c-1d38-4fb6-8dfd-031f981519b3-logs\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.826899 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: 
\"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.827466 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.828063 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.828385 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50406224-f182-48b7-b5b6-566a3245c830-logs\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.829520 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.829939 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc 
kubenswrapper[4813]: I0217 09:07:17.830904 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.835026 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.838572 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.839537 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t79jp\" (UniqueName: \"kubernetes.io/projected/84d61ff1-13a9-4f6b-870a-151ea8de7237-kube-api-access-t79jp\") pod \"watcher-kuttl-applier-0\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.839824 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.844185 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82llx\" (UniqueName: \"kubernetes.io/projected/50406224-f182-48b7-b5b6-566a3245c830-kube-api-access-82llx\") pod \"watcher-kuttl-api-0\" (UID: 
\"50406224-f182-48b7-b5b6-566a3245c830\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.853016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcqvx\" (UniqueName: \"kubernetes.io/projected/99e7302c-1d38-4fb6-8dfd-031f981519b3-kube-api-access-xcqvx\") pod \"watcher-kuttl-api-1\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.916913 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.916981 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.917006 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.917044 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.917102 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw9md\" (UniqueName: \"kubernetes.io/projected/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-kube-api-access-bw9md\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.917128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.920171 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.920428 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.922151 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-cert-memcached-mtls\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.923663 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.927012 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.931520 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.935604 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw9md\" (UniqueName: \"kubernetes.io/projected/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-kube-api-access-bw9md\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:17 crc kubenswrapper[4813]: I0217 09:07:17.939228 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:18 crc kubenswrapper[4813]: I0217 09:07:18.056689 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:07:18 crc kubenswrapper[4813]: I0217 09:07:18.221799 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:07:18 crc kubenswrapper[4813]: W0217 09:07:18.422518 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50406224_f182_48b7_b5b6_566a3245c830.slice/crio-ac5629f38383b72f5db8ec769b70717572ba75c82d69dd8e04f5bdbdd64e01f6 WatchSource:0}: Error finding container ac5629f38383b72f5db8ec769b70717572ba75c82d69dd8e04f5bdbdd64e01f6: Status 404 returned error can't find the container with id ac5629f38383b72f5db8ec769b70717572ba75c82d69dd8e04f5bdbdd64e01f6 Feb 17 09:07:18 crc kubenswrapper[4813]: I0217 09:07:18.428916 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:07:18 crc kubenswrapper[4813]: I0217 09:07:18.532162 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 17 09:07:18 crc kubenswrapper[4813]: W0217 09:07:18.549705 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99e7302c_1d38_4fb6_8dfd_031f981519b3.slice/crio-25f77ce526d5288ed37eda1c084ba90c2591032b1c3930b3837c794aed6449a8 WatchSource:0}: Error finding container 25f77ce526d5288ed37eda1c084ba90c2591032b1c3930b3837c794aed6449a8: Status 404 returned error can't find the container with id 25f77ce526d5288ed37eda1c084ba90c2591032b1c3930b3837c794aed6449a8 Feb 17 09:07:18 crc kubenswrapper[4813]: I0217 09:07:18.646686 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:07:18 crc kubenswrapper[4813]: I0217 09:07:18.772728 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:07:18 crc kubenswrapper[4813]: W0217 09:07:18.778945 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59a5bd53_62f6_4af2_84a0_eb291da8ec2a.slice/crio-7349bab98b0ef835307d96d200f881faeb9dc3df16587debe1e27a485bb319fd WatchSource:0}: Error finding container 7349bab98b0ef835307d96d200f881faeb9dc3df16587debe1e27a485bb319fd: Status 404 returned error can't find the container with id 7349bab98b0ef835307d96d200f881faeb9dc3df16587debe1e27a485bb319fd Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.348954 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"59a5bd53-62f6-4af2-84a0-eb291da8ec2a","Type":"ContainerStarted","Data":"884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19"} Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.350043 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"59a5bd53-62f6-4af2-84a0-eb291da8ec2a","Type":"ContainerStarted","Data":"7349bab98b0ef835307d96d200f881faeb9dc3df16587debe1e27a485bb319fd"} Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.352121 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"84d61ff1-13a9-4f6b-870a-151ea8de7237","Type":"ContainerStarted","Data":"8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9"} Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.352166 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"84d61ff1-13a9-4f6b-870a-151ea8de7237","Type":"ContainerStarted","Data":"be5bdd4e8602dc718f6b3fc2df1f6819c31262fad7f45d9bc0454ce4f49239a1"} Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.355579 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"50406224-f182-48b7-b5b6-566a3245c830","Type":"ContainerStarted","Data":"87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e"} Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.355705 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"50406224-f182-48b7-b5b6-566a3245c830","Type":"ContainerStarted","Data":"95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746"} Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.355801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"50406224-f182-48b7-b5b6-566a3245c830","Type":"ContainerStarted","Data":"ac5629f38383b72f5db8ec769b70717572ba75c82d69dd8e04f5bdbdd64e01f6"} Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.356978 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.360605 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"99e7302c-1d38-4fb6-8dfd-031f981519b3","Type":"ContainerStarted","Data":"31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2"} Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.360790 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"99e7302c-1d38-4fb6-8dfd-031f981519b3","Type":"ContainerStarted","Data":"6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c"} Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.360867 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" 
event={"ID":"99e7302c-1d38-4fb6-8dfd-031f981519b3","Type":"ContainerStarted","Data":"25f77ce526d5288ed37eda1c084ba90c2591032b1c3930b3837c794aed6449a8"} Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.368602 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.370853 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.370839347 podStartE2EDuration="2.370839347s" podCreationTimestamp="2026-02-17 09:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:07:19.368750308 +0000 UTC m=+1587.029511531" watchObservedRunningTime="2026-02-17 09:07:19.370839347 +0000 UTC m=+1587.031600560" Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.399554 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.399486667 podStartE2EDuration="2.399486667s" podCreationTimestamp="2026-02-17 09:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:07:19.39294411 +0000 UTC m=+1587.053705343" watchObservedRunningTime="2026-02-17 09:07:19.399486667 +0000 UTC m=+1587.060247910" Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.413720 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.413704534 podStartE2EDuration="2.413704534s" podCreationTimestamp="2026-02-17 09:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:07:19.410841232 +0000 UTC m=+1587.071602455" 
watchObservedRunningTime="2026-02-17 09:07:19.413704534 +0000 UTC m=+1587.074465757" Feb 17 09:07:19 crc kubenswrapper[4813]: I0217 09:07:19.429688 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.429667651 podStartE2EDuration="2.429667651s" podCreationTimestamp="2026-02-17 09:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:07:19.428731475 +0000 UTC m=+1587.089492698" watchObservedRunningTime="2026-02-17 09:07:19.429667651 +0000 UTC m=+1587.090428874" Feb 17 09:07:21 crc kubenswrapper[4813]: I0217 09:07:21.375964 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 09:07:21 crc kubenswrapper[4813]: I0217 09:07:21.376148 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 09:07:21 crc kubenswrapper[4813]: I0217 09:07:21.588445 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:07:21 crc kubenswrapper[4813]: I0217 09:07:21.648428 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.111535 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:07:22 crc kubenswrapper[4813]: E0217 09:07:22.111938 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 
09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.385592 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qnsfj"] Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.388283 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qnsfj" Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.397496 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qnsfj"] Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.511508 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df2de77-c07e-40b6-8b35-c508fd940200-utilities\") pod \"community-operators-qnsfj\" (UID: \"9df2de77-c07e-40b6-8b35-c508fd940200\") " pod="openshift-marketplace/community-operators-qnsfj" Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.511566 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5672\" (UniqueName: \"kubernetes.io/projected/9df2de77-c07e-40b6-8b35-c508fd940200-kube-api-access-w5672\") pod \"community-operators-qnsfj\" (UID: \"9df2de77-c07e-40b6-8b35-c508fd940200\") " pod="openshift-marketplace/community-operators-qnsfj" Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.511789 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df2de77-c07e-40b6-8b35-c508fd940200-catalog-content\") pod \"community-operators-qnsfj\" (UID: \"9df2de77-c07e-40b6-8b35-c508fd940200\") " pod="openshift-marketplace/community-operators-qnsfj" Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.612653 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9df2de77-c07e-40b6-8b35-c508fd940200-utilities\") pod \"community-operators-qnsfj\" (UID: \"9df2de77-c07e-40b6-8b35-c508fd940200\") " pod="openshift-marketplace/community-operators-qnsfj" Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.612692 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5672\" (UniqueName: \"kubernetes.io/projected/9df2de77-c07e-40b6-8b35-c508fd940200-kube-api-access-w5672\") pod \"community-operators-qnsfj\" (UID: \"9df2de77-c07e-40b6-8b35-c508fd940200\") " pod="openshift-marketplace/community-operators-qnsfj" Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.612780 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df2de77-c07e-40b6-8b35-c508fd940200-catalog-content\") pod \"community-operators-qnsfj\" (UID: \"9df2de77-c07e-40b6-8b35-c508fd940200\") " pod="openshift-marketplace/community-operators-qnsfj" Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.613241 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df2de77-c07e-40b6-8b35-c508fd940200-catalog-content\") pod \"community-operators-qnsfj\" (UID: \"9df2de77-c07e-40b6-8b35-c508fd940200\") " pod="openshift-marketplace/community-operators-qnsfj" Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.613472 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df2de77-c07e-40b6-8b35-c508fd940200-utilities\") pod \"community-operators-qnsfj\" (UID: \"9df2de77-c07e-40b6-8b35-c508fd940200\") " pod="openshift-marketplace/community-operators-qnsfj" Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.633201 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5672\" (UniqueName: 
\"kubernetes.io/projected/9df2de77-c07e-40b6-8b35-c508fd940200-kube-api-access-w5672\") pod \"community-operators-qnsfj\" (UID: \"9df2de77-c07e-40b6-8b35-c508fd940200\") " pod="openshift-marketplace/community-operators-qnsfj"
Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.768073 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qnsfj"
Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.928439 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:07:22 crc kubenswrapper[4813]: I0217 09:07:22.941300 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Feb 17 09:07:23 crc kubenswrapper[4813]: I0217 09:07:23.057003 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:07:23 crc kubenswrapper[4813]: I0217 09:07:23.320927 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qnsfj"]
Feb 17 09:07:23 crc kubenswrapper[4813]: W0217 09:07:23.330950 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9df2de77_c07e_40b6_8b35_c508fd940200.slice/crio-8721c4e8a51504da5242b11d9b89b986dbc9ed6151bfdc6689274739ada348f9 WatchSource:0}: Error finding container 8721c4e8a51504da5242b11d9b89b986dbc9ed6151bfdc6689274739ada348f9: Status 404 returned error can't find the container with id 8721c4e8a51504da5242b11d9b89b986dbc9ed6151bfdc6689274739ada348f9
Feb 17 09:07:23 crc kubenswrapper[4813]: I0217 09:07:23.397944 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnsfj" event={"ID":"9df2de77-c07e-40b6-8b35-c508fd940200","Type":"ContainerStarted","Data":"8721c4e8a51504da5242b11d9b89b986dbc9ed6151bfdc6689274739ada348f9"}
Feb 17 09:07:24 crc kubenswrapper[4813]: I0217 09:07:24.412599 4813 generic.go:334] "Generic (PLEG): container finished" podID="9df2de77-c07e-40b6-8b35-c508fd940200" containerID="88c5c3fe5e40d53d48515ea2d67d576611de9f21059ab18765331c07093302ff" exitCode=0
Feb 17 09:07:24 crc kubenswrapper[4813]: I0217 09:07:24.412969 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnsfj" event={"ID":"9df2de77-c07e-40b6-8b35-c508fd940200","Type":"ContainerDied","Data":"88c5c3fe5e40d53d48515ea2d67d576611de9f21059ab18765331c07093302ff"}
Feb 17 09:07:25 crc kubenswrapper[4813]: I0217 09:07:25.424061 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnsfj" event={"ID":"9df2de77-c07e-40b6-8b35-c508fd940200","Type":"ContainerStarted","Data":"4e87f5d14da21ee7b44275a6d0c4a7f19785c4f681d862beafadbb39795328b9"}
Feb 17 09:07:26 crc kubenswrapper[4813]: I0217 09:07:26.434039 4813 generic.go:334] "Generic (PLEG): container finished" podID="9df2de77-c07e-40b6-8b35-c508fd940200" containerID="4e87f5d14da21ee7b44275a6d0c4a7f19785c4f681d862beafadbb39795328b9" exitCode=0
Feb 17 09:07:26 crc kubenswrapper[4813]: I0217 09:07:26.434219 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnsfj" event={"ID":"9df2de77-c07e-40b6-8b35-c508fd940200","Type":"ContainerDied","Data":"4e87f5d14da21ee7b44275a6d0c4a7f19785c4f681d862beafadbb39795328b9"}
Feb 17 09:07:27 crc kubenswrapper[4813]: I0217 09:07:27.449094 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnsfj" event={"ID":"9df2de77-c07e-40b6-8b35-c508fd940200","Type":"ContainerStarted","Data":"ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146"}
Feb 17 09:07:27 crc kubenswrapper[4813]: I0217 09:07:27.475534 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qnsfj" podStartSLOduration=3.052590016 podStartE2EDuration="5.475514838s" podCreationTimestamp="2026-02-17 09:07:22 +0000 UTC" firstStartedPulling="2026-02-17 09:07:24.41548758 +0000 UTC m=+1592.076248843" lastFinishedPulling="2026-02-17 09:07:26.838412432 +0000 UTC m=+1594.499173665" observedRunningTime="2026-02-17 09:07:27.467976902 +0000 UTC m=+1595.128738165" watchObservedRunningTime="2026-02-17 09:07:27.475514838 +0000 UTC m=+1595.136276071"
Feb 17 09:07:27 crc kubenswrapper[4813]: I0217 09:07:27.927750 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:07:27 crc kubenswrapper[4813]: I0217 09:07:27.934615 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:07:27 crc kubenswrapper[4813]: I0217 09:07:27.939843 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Feb 17 09:07:27 crc kubenswrapper[4813]: I0217 09:07:27.946523 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Feb 17 09:07:28 crc kubenswrapper[4813]: I0217 09:07:28.057069 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:07:28 crc kubenswrapper[4813]: I0217 09:07:28.090934 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:07:28 crc kubenswrapper[4813]: I0217 09:07:28.223050 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:07:28 crc kubenswrapper[4813]: I0217 09:07:28.266526 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:07:28 crc kubenswrapper[4813]: I0217 09:07:28.456285 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:07:28 crc kubenswrapper[4813]: I0217 09:07:28.461240 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:07:28 crc kubenswrapper[4813]: I0217 09:07:28.462174 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Feb 17 09:07:28 crc kubenswrapper[4813]: I0217 09:07:28.497883 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:07:28 crc kubenswrapper[4813]: I0217 09:07:28.511709 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:07:30 crc kubenswrapper[4813]: I0217 09:07:30.506469 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:07:30 crc kubenswrapper[4813]: I0217 09:07:30.507357 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="ceilometer-central-agent" containerID="cri-o://5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d" gracePeriod=30
Feb 17 09:07:30 crc kubenswrapper[4813]: I0217 09:07:30.507808 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="proxy-httpd" containerID="cri-o://4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46" gracePeriod=30
Feb 17 09:07:30 crc kubenswrapper[4813]: I0217 09:07:30.507862 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="ceilometer-notification-agent" containerID="cri-o://fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc" gracePeriod=30
Feb 17 09:07:30 crc kubenswrapper[4813]: I0217 09:07:30.507968 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="sg-core" containerID="cri-o://748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2" gracePeriod=30
Feb 17 09:07:30 crc kubenswrapper[4813]: I0217 09:07:30.521436 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.210:3000/\": EOF"
Feb 17 09:07:31 crc kubenswrapper[4813]: I0217 09:07:31.482549 4813 generic.go:334] "Generic (PLEG): container finished" podID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerID="4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46" exitCode=0
Feb 17 09:07:31 crc kubenswrapper[4813]: I0217 09:07:31.482884 4813 generic.go:334] "Generic (PLEG): container finished" podID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerID="748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2" exitCode=2
Feb 17 09:07:31 crc kubenswrapper[4813]: I0217 09:07:31.482903 4813 generic.go:334] "Generic (PLEG): container finished" podID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerID="5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d" exitCode=0
Feb 17 09:07:31 crc kubenswrapper[4813]: I0217 09:07:31.482728 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d17a4c77-8c43-43fe-b573-1c242d9ab664","Type":"ContainerDied","Data":"4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46"}
Feb 17 09:07:31 crc kubenswrapper[4813]: I0217 09:07:31.482954 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d17a4c77-8c43-43fe-b573-1c242d9ab664","Type":"ContainerDied","Data":"748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2"}
Feb 17 09:07:31 crc kubenswrapper[4813]: I0217 09:07:31.482978 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d17a4c77-8c43-43fe-b573-1c242d9ab664","Type":"ContainerDied","Data":"5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d"}
Feb 17 09:07:32 crc kubenswrapper[4813]: I0217 09:07:32.769345 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qnsfj"
Feb 17 09:07:32 crc kubenswrapper[4813]: I0217 09:07:32.769405 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qnsfj"
Feb 17 09:07:32 crc kubenswrapper[4813]: I0217 09:07:32.834626 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qnsfj"
Feb 17 09:07:33 crc kubenswrapper[4813]: I0217 09:07:33.553835 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qnsfj"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.338456 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.442977 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-combined-ca-bundle\") pod \"d17a4c77-8c43-43fe-b573-1c242d9ab664\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") "
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.443248 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-scripts\") pod \"d17a4c77-8c43-43fe-b573-1c242d9ab664\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") "
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.443341 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d17a4c77-8c43-43fe-b573-1c242d9ab664-log-httpd\") pod \"d17a4c77-8c43-43fe-b573-1c242d9ab664\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") "
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.443447 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-ceilometer-tls-certs\") pod \"d17a4c77-8c43-43fe-b573-1c242d9ab664\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") "
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.443638 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d17a4c77-8c43-43fe-b573-1c242d9ab664-run-httpd\") pod \"d17a4c77-8c43-43fe-b573-1c242d9ab664\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") "
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.444042 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-sg-core-conf-yaml\") pod \"d17a4c77-8c43-43fe-b573-1c242d9ab664\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") "
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.444431 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8xj7\" (UniqueName: \"kubernetes.io/projected/d17a4c77-8c43-43fe-b573-1c242d9ab664-kube-api-access-w8xj7\") pod \"d17a4c77-8c43-43fe-b573-1c242d9ab664\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") "
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.443808 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d17a4c77-8c43-43fe-b573-1c242d9ab664-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d17a4c77-8c43-43fe-b573-1c242d9ab664" (UID: "d17a4c77-8c43-43fe-b573-1c242d9ab664"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.443982 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d17a4c77-8c43-43fe-b573-1c242d9ab664-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d17a4c77-8c43-43fe-b573-1c242d9ab664" (UID: "d17a4c77-8c43-43fe-b573-1c242d9ab664"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.444609 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-config-data\") pod \"d17a4c77-8c43-43fe-b573-1c242d9ab664\" (UID: \"d17a4c77-8c43-43fe-b573-1c242d9ab664\") "
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.445002 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d17a4c77-8c43-43fe-b573-1c242d9ab664-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.445063 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d17a4c77-8c43-43fe-b573-1c242d9ab664-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.472441 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-scripts" (OuterVolumeSpecName: "scripts") pod "d17a4c77-8c43-43fe-b573-1c242d9ab664" (UID: "d17a4c77-8c43-43fe-b573-1c242d9ab664"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.479490 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d17a4c77-8c43-43fe-b573-1c242d9ab664-kube-api-access-w8xj7" (OuterVolumeSpecName: "kube-api-access-w8xj7") pod "d17a4c77-8c43-43fe-b573-1c242d9ab664" (UID: "d17a4c77-8c43-43fe-b573-1c242d9ab664"). InnerVolumeSpecName "kube-api-access-w8xj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.515089 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d17a4c77-8c43-43fe-b573-1c242d9ab664" (UID: "d17a4c77-8c43-43fe-b573-1c242d9ab664"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.520912 4813 generic.go:334] "Generic (PLEG): container finished" podID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerID="fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc" exitCode=0
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.520949 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d17a4c77-8c43-43fe-b573-1c242d9ab664","Type":"ContainerDied","Data":"fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc"}
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.520975 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d17a4c77-8c43-43fe-b573-1c242d9ab664","Type":"ContainerDied","Data":"c492003eefbf42dcc2d71efe2e46fee13010447857b4540312a89007d211d8d4"}
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.520992 4813 scope.go:117] "RemoveContainer" containerID="4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.521102 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.538992 4813 scope.go:117] "RemoveContainer" containerID="748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.539823 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d17a4c77-8c43-43fe-b573-1c242d9ab664" (UID: "d17a4c77-8c43-43fe-b573-1c242d9ab664"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.547071 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8xj7\" (UniqueName: \"kubernetes.io/projected/d17a4c77-8c43-43fe-b573-1c242d9ab664-kube-api-access-w8xj7\") on node \"crc\" DevicePath \"\""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.547105 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.547117 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.547129 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.548653 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d17a4c77-8c43-43fe-b573-1c242d9ab664" (UID: "d17a4c77-8c43-43fe-b573-1c242d9ab664"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.555420 4813 scope.go:117] "RemoveContainer" containerID="fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.572522 4813 scope.go:117] "RemoveContainer" containerID="5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.589868 4813 scope.go:117] "RemoveContainer" containerID="4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46"
Feb 17 09:07:35 crc kubenswrapper[4813]: E0217 09:07:35.590267 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46\": container with ID starting with 4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46 not found: ID does not exist" containerID="4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.590298 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46"} err="failed to get container status \"4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46\": rpc error: code = NotFound desc = could not find container \"4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46\": container with ID starting with 4da4a787e46aee96368e31fcdbce80dc65367da85944c2141cc21d506c438c46 not found: ID does not exist"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.590337 4813 scope.go:117] "RemoveContainer" containerID="748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2"
Feb 17 09:07:35 crc kubenswrapper[4813]: E0217 09:07:35.590748 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2\": container with ID starting with 748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2 not found: ID does not exist" containerID="748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.590782 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2"} err="failed to get container status \"748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2\": rpc error: code = NotFound desc = could not find container \"748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2\": container with ID starting with 748242e06cf7af6f0e8e1a0c23be4a8b8a78475fd07ceb215a59d8bef1eb45e2 not found: ID does not exist"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.590797 4813 scope.go:117] "RemoveContainer" containerID="fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc"
Feb 17 09:07:35 crc kubenswrapper[4813]: E0217 09:07:35.591139 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc\": container with ID starting with fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc not found: ID does not exist" containerID="fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.591208 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc"} err="failed to get container status \"fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc\": rpc error: code = NotFound desc = could not find container \"fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc\": container with ID starting with fff4cc1989e4d43e8e8778f1f1f75da970c93cb3532f5707e0bb2b7ada3d60dc not found: ID does not exist"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.591263 4813 scope.go:117] "RemoveContainer" containerID="5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.593228 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-config-data" (OuterVolumeSpecName: "config-data") pod "d17a4c77-8c43-43fe-b573-1c242d9ab664" (UID: "d17a4c77-8c43-43fe-b573-1c242d9ab664"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:07:35 crc kubenswrapper[4813]: E0217 09:07:35.593239 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d\": container with ID starting with 5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d not found: ID does not exist" containerID="5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.593295 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d"} err="failed to get container status \"5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d\": rpc error: code = NotFound desc = could not find container \"5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d\": container with ID starting with 5dc6e222f972cc1bcb2f4795ed5802dd3deb413a6e9d3659882ad3e299a5786d not found: ID does not exist"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.648810 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.648846 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17a4c77-8c43-43fe-b573-1c242d9ab664-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.879043 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.893565 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.913257 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:07:35 crc kubenswrapper[4813]: E0217 09:07:35.913814 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="ceilometer-central-agent"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.913839 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="ceilometer-central-agent"
Feb 17 09:07:35 crc kubenswrapper[4813]: E0217 09:07:35.913860 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="proxy-httpd"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.913900 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="proxy-httpd"
Feb 17 09:07:35 crc kubenswrapper[4813]: E0217 09:07:35.913923 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="sg-core"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.913934 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="sg-core"
Feb 17 09:07:35 crc kubenswrapper[4813]: E0217 09:07:35.913958 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="ceilometer-notification-agent"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.913968 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="ceilometer-notification-agent"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.917493 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="ceilometer-central-agent"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.917530 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="sg-core"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.917548 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="proxy-httpd"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.917586 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" containerName="ceilometer-notification-agent"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.920160 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.934772 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.934848 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.935092 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Feb 17 09:07:35 crc kubenswrapper[4813]: I0217 09:07:35.973464 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.064639 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-run-httpd\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.064730 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.064850 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-log-httpd\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.064916 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.064947 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-config-data\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.064990 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcctg\" (UniqueName: \"kubernetes.io/projected/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-kube-api-access-qcctg\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.065034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.065193 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-scripts\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.166329 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-log-httpd\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.166388 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.166411 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-config-data\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.166434 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcctg\" (UniqueName: \"kubernetes.io/projected/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-kube-api-access-qcctg\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.166457 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.166521 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-scripts\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.166590 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-run-httpd\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.166621 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.167833 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-log-httpd\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.168161 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-run-httpd\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.171701 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-scripts\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.173162 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.174890 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.176101 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.180818 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-config-data\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.202096 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcctg\" (UniqueName: \"kubernetes.io/projected/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-kube-api-access-qcctg\") pod \"ceilometer-0\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.297113 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.372831 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qnsfj"]
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.373173 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qnsfj" podUID="9df2de77-c07e-40b6-8b35-c508fd940200" containerName="registry-server" containerID="cri-o://ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146" gracePeriod=2
Feb 17 09:07:36 crc kubenswrapper[4813]: I0217 09:07:36.819050 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:07:36 crc kubenswrapper[4813]: W0217 09:07:36.820911 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79bcfdd1_0319_4bc1_b678_dadd4b8164b6.slice/crio-10decee02c3a9654a30783ee1596b6605e6e72daceeabb602de1e26cf3112d1c WatchSource:0}: Error finding container 10decee02c3a9654a30783ee1596b6605e6e72daceeabb602de1e26cf3112d1c: Status 404 returned error can't find the container with id 10decee02c3a9654a30783ee1596b6605e6e72daceeabb602de1e26cf3112d1c
Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.111039 4813 scope.go:117] "RemoveContainer"
containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:07:37 crc kubenswrapper[4813]: E0217 09:07:37.111559 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.120077 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d17a4c77-8c43-43fe-b573-1c242d9ab664" path="/var/lib/kubelet/pods/d17a4c77-8c43-43fe-b573-1c242d9ab664/volumes" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.359902 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qnsfj" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.496963 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df2de77-c07e-40b6-8b35-c508fd940200-utilities\") pod \"9df2de77-c07e-40b6-8b35-c508fd940200\" (UID: \"9df2de77-c07e-40b6-8b35-c508fd940200\") " Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.497121 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df2de77-c07e-40b6-8b35-c508fd940200-catalog-content\") pod \"9df2de77-c07e-40b6-8b35-c508fd940200\" (UID: \"9df2de77-c07e-40b6-8b35-c508fd940200\") " Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.497208 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5672\" (UniqueName: \"kubernetes.io/projected/9df2de77-c07e-40b6-8b35-c508fd940200-kube-api-access-w5672\") pod 
\"9df2de77-c07e-40b6-8b35-c508fd940200\" (UID: \"9df2de77-c07e-40b6-8b35-c508fd940200\") " Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.498371 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df2de77-c07e-40b6-8b35-c508fd940200-utilities" (OuterVolumeSpecName: "utilities") pod "9df2de77-c07e-40b6-8b35-c508fd940200" (UID: "9df2de77-c07e-40b6-8b35-c508fd940200"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.510919 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df2de77-c07e-40b6-8b35-c508fd940200-kube-api-access-w5672" (OuterVolumeSpecName: "kube-api-access-w5672") pod "9df2de77-c07e-40b6-8b35-c508fd940200" (UID: "9df2de77-c07e-40b6-8b35-c508fd940200"). InnerVolumeSpecName "kube-api-access-w5672". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.549747 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9df2de77-c07e-40b6-8b35-c508fd940200-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9df2de77-c07e-40b6-8b35-c508fd940200" (UID: "9df2de77-c07e-40b6-8b35-c508fd940200"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.552767 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79bcfdd1-0319-4bc1-b678-dadd4b8164b6","Type":"ContainerStarted","Data":"10decee02c3a9654a30783ee1596b6605e6e72daceeabb602de1e26cf3112d1c"} Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.556457 4813 generic.go:334] "Generic (PLEG): container finished" podID="9df2de77-c07e-40b6-8b35-c508fd940200" containerID="ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146" exitCode=0 Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.556495 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnsfj" event={"ID":"9df2de77-c07e-40b6-8b35-c508fd940200","Type":"ContainerDied","Data":"ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146"} Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.556520 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qnsfj" event={"ID":"9df2de77-c07e-40b6-8b35-c508fd940200","Type":"ContainerDied","Data":"8721c4e8a51504da5242b11d9b89b986dbc9ed6151bfdc6689274739ada348f9"} Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.556541 4813 scope.go:117] "RemoveContainer" containerID="ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.556693 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qnsfj" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.586156 4813 scope.go:117] "RemoveContainer" containerID="4e87f5d14da21ee7b44275a6d0c4a7f19785c4f681d862beafadbb39795328b9" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.599427 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5672\" (UniqueName: \"kubernetes.io/projected/9df2de77-c07e-40b6-8b35-c508fd940200-kube-api-access-w5672\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.599597 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9df2de77-c07e-40b6-8b35-c508fd940200-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.599675 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9df2de77-c07e-40b6-8b35-c508fd940200-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.647655 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qnsfj"] Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.654435 4813 scope.go:117] "RemoveContainer" containerID="88c5c3fe5e40d53d48515ea2d67d576611de9f21059ab18765331c07093302ff" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.656348 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qnsfj"] Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.679121 4813 scope.go:117] "RemoveContainer" containerID="ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146" Feb 17 09:07:37 crc kubenswrapper[4813]: E0217 09:07:37.679590 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146\": container with ID starting with ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146 not found: ID does not exist" containerID="ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.679629 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146"} err="failed to get container status \"ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146\": rpc error: code = NotFound desc = could not find container \"ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146\": container with ID starting with ec227f859b48ca566c07e077314d48c99be8d6674904bea6ea54b86401e74146 not found: ID does not exist" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.679652 4813 scope.go:117] "RemoveContainer" containerID="4e87f5d14da21ee7b44275a6d0c4a7f19785c4f681d862beafadbb39795328b9" Feb 17 09:07:37 crc kubenswrapper[4813]: E0217 09:07:37.679958 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e87f5d14da21ee7b44275a6d0c4a7f19785c4f681d862beafadbb39795328b9\": container with ID starting with 4e87f5d14da21ee7b44275a6d0c4a7f19785c4f681d862beafadbb39795328b9 not found: ID does not exist" containerID="4e87f5d14da21ee7b44275a6d0c4a7f19785c4f681d862beafadbb39795328b9" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.679985 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e87f5d14da21ee7b44275a6d0c4a7f19785c4f681d862beafadbb39795328b9"} err="failed to get container status \"4e87f5d14da21ee7b44275a6d0c4a7f19785c4f681d862beafadbb39795328b9\": rpc error: code = NotFound desc = could not find container \"4e87f5d14da21ee7b44275a6d0c4a7f19785c4f681d862beafadbb39795328b9\": container with ID 
starting with 4e87f5d14da21ee7b44275a6d0c4a7f19785c4f681d862beafadbb39795328b9 not found: ID does not exist" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.680002 4813 scope.go:117] "RemoveContainer" containerID="88c5c3fe5e40d53d48515ea2d67d576611de9f21059ab18765331c07093302ff" Feb 17 09:07:37 crc kubenswrapper[4813]: E0217 09:07:37.680236 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c5c3fe5e40d53d48515ea2d67d576611de9f21059ab18765331c07093302ff\": container with ID starting with 88c5c3fe5e40d53d48515ea2d67d576611de9f21059ab18765331c07093302ff not found: ID does not exist" containerID="88c5c3fe5e40d53d48515ea2d67d576611de9f21059ab18765331c07093302ff" Feb 17 09:07:37 crc kubenswrapper[4813]: I0217 09:07:37.680289 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c5c3fe5e40d53d48515ea2d67d576611de9f21059ab18765331c07093302ff"} err="failed to get container status \"88c5c3fe5e40d53d48515ea2d67d576611de9f21059ab18765331c07093302ff\": rpc error: code = NotFound desc = could not find container \"88c5c3fe5e40d53d48515ea2d67d576611de9f21059ab18765331c07093302ff\": container with ID starting with 88c5c3fe5e40d53d48515ea2d67d576611de9f21059ab18765331c07093302ff not found: ID does not exist" Feb 17 09:07:38 crc kubenswrapper[4813]: E0217 09:07:38.408811 4813 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.113:42808->38.102.83.113:33079: write tcp 38.102.83.113:42808->38.102.83.113:33079: write: broken pipe Feb 17 09:07:38 crc kubenswrapper[4813]: I0217 09:07:38.576205 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79bcfdd1-0319-4bc1-b678-dadd4b8164b6","Type":"ContainerStarted","Data":"04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137"} Feb 17 09:07:38 crc kubenswrapper[4813]: I0217 09:07:38.576259 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79bcfdd1-0319-4bc1-b678-dadd4b8164b6","Type":"ContainerStarted","Data":"79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a"} Feb 17 09:07:39 crc kubenswrapper[4813]: I0217 09:07:39.150382 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df2de77-c07e-40b6-8b35-c508fd940200" path="/var/lib/kubelet/pods/9df2de77-c07e-40b6-8b35-c508fd940200/volumes" Feb 17 09:07:39 crc kubenswrapper[4813]: I0217 09:07:39.589850 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79bcfdd1-0319-4bc1-b678-dadd4b8164b6","Type":"ContainerStarted","Data":"6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246"} Feb 17 09:07:40 crc kubenswrapper[4813]: I0217 09:07:40.605349 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79bcfdd1-0319-4bc1-b678-dadd4b8164b6","Type":"ContainerStarted","Data":"13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab"} Feb 17 09:07:40 crc kubenswrapper[4813]: I0217 09:07:40.606630 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:07:40 crc kubenswrapper[4813]: I0217 09:07:40.637579 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.1531770630000002 podStartE2EDuration="5.637559486s" podCreationTimestamp="2026-02-17 09:07:35 +0000 UTC" firstStartedPulling="2026-02-17 09:07:36.824012271 +0000 UTC m=+1604.484773494" lastFinishedPulling="2026-02-17 09:07:40.308394694 +0000 UTC m=+1607.969155917" observedRunningTime="2026-02-17 09:07:40.631462862 +0000 UTC m=+1608.292224095" watchObservedRunningTime="2026-02-17 09:07:40.637559486 +0000 UTC m=+1608.298320729" Feb 17 09:07:50 crc kubenswrapper[4813]: I0217 09:07:50.507607 4813 scope.go:117] 
"RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:07:50 crc kubenswrapper[4813]: E0217 09:07:50.508652 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:07:55 crc kubenswrapper[4813]: I0217 09:07:55.034834 4813 scope.go:117] "RemoveContainer" containerID="22252909237a13bef155d264aba57c53fa577b41ebae70a5104af419ea7abc18" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.172175 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh"] Feb 17 09:08:00 crc kubenswrapper[4813]: E0217 09:08:00.173248 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df2de77-c07e-40b6-8b35-c508fd940200" containerName="registry-server" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.173272 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df2de77-c07e-40b6-8b35-c508fd940200" containerName="registry-server" Feb 17 09:08:00 crc kubenswrapper[4813]: E0217 09:08:00.173331 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df2de77-c07e-40b6-8b35-c508fd940200" containerName="extract-utilities" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.173345 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df2de77-c07e-40b6-8b35-c508fd940200" containerName="extract-utilities" Feb 17 09:08:00 crc kubenswrapper[4813]: E0217 09:08:00.173382 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df2de77-c07e-40b6-8b35-c508fd940200" containerName="extract-content" Feb 17 09:08:00 crc kubenswrapper[4813]: 
I0217 09:08:00.173396 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df2de77-c07e-40b6-8b35-c508fd940200" containerName="extract-content" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.173680 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df2de77-c07e-40b6-8b35-c508fd940200" containerName="registry-server" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.174708 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.177941 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-scripts" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.178150 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.195492 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh"] Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.302073 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-scripts-volume\") pod \"watcher-kuttl-db-purge-29521988-c5vmh\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.302293 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9spp2\" (UniqueName: \"kubernetes.io/projected/2ed65515-19dc-45e7-810d-7717447f15cd-kube-api-access-9spp2\") pod \"watcher-kuttl-db-purge-29521988-c5vmh\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 
09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.302422 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29521988-c5vmh\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.302587 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-config-data\") pod \"watcher-kuttl-db-purge-29521988-c5vmh\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.404247 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9spp2\" (UniqueName: \"kubernetes.io/projected/2ed65515-19dc-45e7-810d-7717447f15cd-kube-api-access-9spp2\") pod \"watcher-kuttl-db-purge-29521988-c5vmh\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.404325 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29521988-c5vmh\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.404378 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-config-data\") pod 
\"watcher-kuttl-db-purge-29521988-c5vmh\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.404401 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-scripts-volume\") pod \"watcher-kuttl-db-purge-29521988-c5vmh\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.410104 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-scripts-volume\") pod \"watcher-kuttl-db-purge-29521988-c5vmh\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.410851 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-config-data\") pod \"watcher-kuttl-db-purge-29521988-c5vmh\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.414840 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29521988-c5vmh\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.427813 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9spp2\" (UniqueName: 
\"kubernetes.io/projected/2ed65515-19dc-45e7-810d-7717447f15cd-kube-api-access-9spp2\") pod \"watcher-kuttl-db-purge-29521988-c5vmh\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:00 crc kubenswrapper[4813]: I0217 09:08:00.509022 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:01 crc kubenswrapper[4813]: I0217 09:08:01.074921 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh"] Feb 17 09:08:01 crc kubenswrapper[4813]: I0217 09:08:01.860291 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" event={"ID":"2ed65515-19dc-45e7-810d-7717447f15cd","Type":"ContainerStarted","Data":"583f6bf8e0f063e675f7c35e3e22c9a489439c94b813607cdedc4992240c64ad"} Feb 17 09:08:01 crc kubenswrapper[4813]: I0217 09:08:01.860612 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" event={"ID":"2ed65515-19dc-45e7-810d-7717447f15cd","Type":"ContainerStarted","Data":"85a909437caeb371170b95a31ccd01b7ad52c6851499db8a6776b52fbf02c079"} Feb 17 09:08:01 crc kubenswrapper[4813]: I0217 09:08:01.882226 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" podStartSLOduration=1.882206492 podStartE2EDuration="1.882206492s" podCreationTimestamp="2026-02-17 09:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:08:01.876914971 +0000 UTC m=+1629.537676194" watchObservedRunningTime="2026-02-17 09:08:01.882206492 +0000 UTC m=+1629.542967715" Feb 17 09:08:03 crc kubenswrapper[4813]: I0217 09:08:03.888115 4813 generic.go:334] "Generic 
(PLEG): container finished" podID="2ed65515-19dc-45e7-810d-7717447f15cd" containerID="583f6bf8e0f063e675f7c35e3e22c9a489439c94b813607cdedc4992240c64ad" exitCode=0 Feb 17 09:08:03 crc kubenswrapper[4813]: I0217 09:08:03.888252 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" event={"ID":"2ed65515-19dc-45e7-810d-7717447f15cd","Type":"ContainerDied","Data":"583f6bf8e0f063e675f7c35e3e22c9a489439c94b813607cdedc4992240c64ad"} Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.278262 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.388938 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-config-data\") pod \"2ed65515-19dc-45e7-810d-7717447f15cd\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.389124 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-scripts-volume\") pod \"2ed65515-19dc-45e7-810d-7717447f15cd\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.389187 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-combined-ca-bundle\") pod \"2ed65515-19dc-45e7-810d-7717447f15cd\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") " Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.389213 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9spp2\" (UniqueName: 
\"kubernetes.io/projected/2ed65515-19dc-45e7-810d-7717447f15cd-kube-api-access-9spp2\") pod \"2ed65515-19dc-45e7-810d-7717447f15cd\" (UID: \"2ed65515-19dc-45e7-810d-7717447f15cd\") "
Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.394773 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-scripts-volume" (OuterVolumeSpecName: "scripts-volume") pod "2ed65515-19dc-45e7-810d-7717447f15cd" (UID: "2ed65515-19dc-45e7-810d-7717447f15cd"). InnerVolumeSpecName "scripts-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.394873 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed65515-19dc-45e7-810d-7717447f15cd-kube-api-access-9spp2" (OuterVolumeSpecName: "kube-api-access-9spp2") pod "2ed65515-19dc-45e7-810d-7717447f15cd" (UID: "2ed65515-19dc-45e7-810d-7717447f15cd"). InnerVolumeSpecName "kube-api-access-9spp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.429011 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-config-data" (OuterVolumeSpecName: "config-data") pod "2ed65515-19dc-45e7-810d-7717447f15cd" (UID: "2ed65515-19dc-45e7-810d-7717447f15cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.429378 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ed65515-19dc-45e7-810d-7717447f15cd" (UID: "2ed65515-19dc-45e7-810d-7717447f15cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.491519 4813 reconciler_common.go:293] "Volume detached for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-scripts-volume\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.491551 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.491564 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9spp2\" (UniqueName: \"kubernetes.io/projected/2ed65515-19dc-45e7-810d-7717447f15cd-kube-api-access-9spp2\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.491580 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed65515-19dc-45e7-810d-7717447f15cd-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.921339 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh" event={"ID":"2ed65515-19dc-45e7-810d-7717447f15cd","Type":"ContainerDied","Data":"85a909437caeb371170b95a31ccd01b7ad52c6851499db8a6776b52fbf02c079"}
Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.921396 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85a909437caeb371170b95a31ccd01b7ad52c6851499db8a6776b52fbf02c079"
Feb 17 09:08:05 crc kubenswrapper[4813]: I0217 09:08:05.921403 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh"
Feb 17 09:08:06 crc kubenswrapper[4813]: I0217 09:08:06.111185 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58"
Feb 17 09:08:06 crc kubenswrapper[4813]: E0217 09:08:06.111520 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:08:06 crc kubenswrapper[4813]: I0217 09:08:06.310145 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:07 crc kubenswrapper[4813]: I0217 09:08:07.977399 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw"]
Feb 17 09:08:07 crc kubenswrapper[4813]: I0217 09:08:07.983211 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-pm9dw"]
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.004425 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh"]
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.010174 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29521988-c5vmh"]
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.054178 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-nt5bd"]
Feb 17 09:08:08 crc kubenswrapper[4813]: E0217 09:08:08.054632 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed65515-19dc-45e7-810d-7717447f15cd" containerName="watcher-db-manage"
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.054654 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed65515-19dc-45e7-810d-7717447f15cd" containerName="watcher-db-manage"
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.054859 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed65515-19dc-45e7-810d-7717447f15cd" containerName="watcher-db-manage"
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.055467 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-nt5bd"
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.082369 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-nt5bd"]
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.133076 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.133345 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="50406224-f182-48b7-b5b6-566a3245c830" containerName="watcher-kuttl-api-log" containerID="cri-o://95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746" gracePeriod=30
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.133677 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="50406224-f182-48b7-b5b6-566a3245c830" containerName="watcher-api" containerID="cri-o://87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e" gracePeriod=30
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.136332 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxcmz\" (UniqueName: \"kubernetes.io/projected/e6c74c92-c953-41ab-ae61-e6bd841f056d-kube-api-access-hxcmz\") pod \"watchertest-account-delete-nt5bd\" (UID: \"e6c74c92-c953-41ab-ae61-e6bd841f056d\") " pod="watcher-kuttl-default/watchertest-account-delete-nt5bd"
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.136389 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6c74c92-c953-41ab-ae61-e6bd841f056d-operator-scripts\") pod \"watchertest-account-delete-nt5bd\" (UID: \"e6c74c92-c953-41ab-ae61-e6bd841f056d\") " pod="watcher-kuttl-default/watchertest-account-delete-nt5bd"
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.143428 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.143657 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="99e7302c-1d38-4fb6-8dfd-031f981519b3" containerName="watcher-kuttl-api-log" containerID="cri-o://6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c" gracePeriod=30
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.144003 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="99e7302c-1d38-4fb6-8dfd-031f981519b3" containerName="watcher-api" containerID="cri-o://31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2" gracePeriod=30
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.158570 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.158774 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="84d61ff1-13a9-4f6b-870a-151ea8de7237" containerName="watcher-applier" containerID="cri-o://8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9" gracePeriod=30
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.199164 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.199380 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="59a5bd53-62f6-4af2-84a0-eb291da8ec2a" containerName="watcher-decision-engine" containerID="cri-o://884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19" gracePeriod=30
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.237416 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxcmz\" (UniqueName: \"kubernetes.io/projected/e6c74c92-c953-41ab-ae61-e6bd841f056d-kube-api-access-hxcmz\") pod \"watchertest-account-delete-nt5bd\" (UID: \"e6c74c92-c953-41ab-ae61-e6bd841f056d\") " pod="watcher-kuttl-default/watchertest-account-delete-nt5bd"
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.237475 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6c74c92-c953-41ab-ae61-e6bd841f056d-operator-scripts\") pod \"watchertest-account-delete-nt5bd\" (UID: \"e6c74c92-c953-41ab-ae61-e6bd841f056d\") " pod="watcher-kuttl-default/watchertest-account-delete-nt5bd"
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.238198 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6c74c92-c953-41ab-ae61-e6bd841f056d-operator-scripts\") pod \"watchertest-account-delete-nt5bd\" (UID: \"e6c74c92-c953-41ab-ae61-e6bd841f056d\") " pod="watcher-kuttl-default/watchertest-account-delete-nt5bd"
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.258034 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxcmz\" (UniqueName: \"kubernetes.io/projected/e6c74c92-c953-41ab-ae61-e6bd841f056d-kube-api-access-hxcmz\") pod \"watchertest-account-delete-nt5bd\" (UID: \"e6c74c92-c953-41ab-ae61-e6bd841f056d\") " pod="watcher-kuttl-default/watchertest-account-delete-nt5bd"
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.373060 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-nt5bd"
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.863558 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-nt5bd"]
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.943258 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-nt5bd" event={"ID":"e6c74c92-c953-41ab-ae61-e6bd841f056d","Type":"ContainerStarted","Data":"01dd8cc4c056a3244c86b4fca2343755b4543671aaa94d55649fbe42d2fcc66d"}
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.945435 4813 generic.go:334] "Generic (PLEG): container finished" podID="50406224-f182-48b7-b5b6-566a3245c830" containerID="95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746" exitCode=143
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.945489 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"50406224-f182-48b7-b5b6-566a3245c830","Type":"ContainerDied","Data":"95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746"}
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.948510 4813 generic.go:334] "Generic (PLEG): container finished" podID="99e7302c-1d38-4fb6-8dfd-031f981519b3" containerID="6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c" exitCode=143
Feb 17 09:08:08 crc kubenswrapper[4813]: I0217 09:08:08.948547 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"99e7302c-1d38-4fb6-8dfd-031f981519b3","Type":"ContainerDied","Data":"6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c"}
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.120535 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed65515-19dc-45e7-810d-7717447f15cd" path="/var/lib/kubelet/pods/2ed65515-19dc-45e7-810d-7717447f15cd/volumes"
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.121199 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f511bb8c-5900-4729-927c-4cdf62c78aef" path="/var/lib/kubelet/pods/f511bb8c-5900-4729-927c-4cdf62c78aef/volumes"
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.476647 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.659233 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcqvx\" (UniqueName: \"kubernetes.io/projected/99e7302c-1d38-4fb6-8dfd-031f981519b3-kube-api-access-xcqvx\") pod \"99e7302c-1d38-4fb6-8dfd-031f981519b3\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") "
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.659301 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-combined-ca-bundle\") pod \"99e7302c-1d38-4fb6-8dfd-031f981519b3\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") "
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.659409 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99e7302c-1d38-4fb6-8dfd-031f981519b3-logs\") pod \"99e7302c-1d38-4fb6-8dfd-031f981519b3\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") "
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.659467 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-cert-memcached-mtls\") pod \"99e7302c-1d38-4fb6-8dfd-031f981519b3\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") "
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.659530 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-custom-prometheus-ca\") pod \"99e7302c-1d38-4fb6-8dfd-031f981519b3\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") "
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.659560 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-config-data\") pod \"99e7302c-1d38-4fb6-8dfd-031f981519b3\" (UID: \"99e7302c-1d38-4fb6-8dfd-031f981519b3\") "
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.659906 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99e7302c-1d38-4fb6-8dfd-031f981519b3-logs" (OuterVolumeSpecName: "logs") pod "99e7302c-1d38-4fb6-8dfd-031f981519b3" (UID: "99e7302c-1d38-4fb6-8dfd-031f981519b3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.667492 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99e7302c-1d38-4fb6-8dfd-031f981519b3-kube-api-access-xcqvx" (OuterVolumeSpecName: "kube-api-access-xcqvx") pod "99e7302c-1d38-4fb6-8dfd-031f981519b3" (UID: "99e7302c-1d38-4fb6-8dfd-031f981519b3"). InnerVolumeSpecName "kube-api-access-xcqvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.685498 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "99e7302c-1d38-4fb6-8dfd-031f981519b3" (UID: "99e7302c-1d38-4fb6-8dfd-031f981519b3"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.704906 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99e7302c-1d38-4fb6-8dfd-031f981519b3" (UID: "99e7302c-1d38-4fb6-8dfd-031f981519b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.719414 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-config-data" (OuterVolumeSpecName: "config-data") pod "99e7302c-1d38-4fb6-8dfd-031f981519b3" (UID: "99e7302c-1d38-4fb6-8dfd-031f981519b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.738692 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.746652 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "99e7302c-1d38-4fb6-8dfd-031f981519b3" (UID: "99e7302c-1d38-4fb6-8dfd-031f981519b3"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.764349 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-custom-prometheus-ca\") pod \"50406224-f182-48b7-b5b6-566a3245c830\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") "
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.764413 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-config-data\") pod \"50406224-f182-48b7-b5b6-566a3245c830\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") "
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.764462 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82llx\" (UniqueName: \"kubernetes.io/projected/50406224-f182-48b7-b5b6-566a3245c830-kube-api-access-82llx\") pod \"50406224-f182-48b7-b5b6-566a3245c830\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") "
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.764528 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-combined-ca-bundle\") pod \"50406224-f182-48b7-b5b6-566a3245c830\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") "
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.764573 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-cert-memcached-mtls\") pod \"50406224-f182-48b7-b5b6-566a3245c830\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") "
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.764592 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50406224-f182-48b7-b5b6-566a3245c830-logs\") pod \"50406224-f182-48b7-b5b6-566a3245c830\" (UID: \"50406224-f182-48b7-b5b6-566a3245c830\") "
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.764796 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.764814 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.764823 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcqvx\" (UniqueName: \"kubernetes.io/projected/99e7302c-1d38-4fb6-8dfd-031f981519b3-kube-api-access-xcqvx\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.764833 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.764842 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99e7302c-1d38-4fb6-8dfd-031f981519b3-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.764850 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/99e7302c-1d38-4fb6-8dfd-031f981519b3-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.765279 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50406224-f182-48b7-b5b6-566a3245c830-logs" (OuterVolumeSpecName: "logs") pod "50406224-f182-48b7-b5b6-566a3245c830" (UID: "50406224-f182-48b7-b5b6-566a3245c830"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.771533 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50406224-f182-48b7-b5b6-566a3245c830-kube-api-access-82llx" (OuterVolumeSpecName: "kube-api-access-82llx") pod "50406224-f182-48b7-b5b6-566a3245c830" (UID: "50406224-f182-48b7-b5b6-566a3245c830"). InnerVolumeSpecName "kube-api-access-82llx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.790021 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "50406224-f182-48b7-b5b6-566a3245c830" (UID: "50406224-f182-48b7-b5b6-566a3245c830"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.792473 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50406224-f182-48b7-b5b6-566a3245c830" (UID: "50406224-f182-48b7-b5b6-566a3245c830"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.830501 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-config-data" (OuterVolumeSpecName: "config-data") pod "50406224-f182-48b7-b5b6-566a3245c830" (UID: "50406224-f182-48b7-b5b6-566a3245c830"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.832874 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "50406224-f182-48b7-b5b6-566a3245c830" (UID: "50406224-f182-48b7-b5b6-566a3245c830"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.865383 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.865411 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50406224-f182-48b7-b5b6-566a3245c830-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.865420 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.865428 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.865448 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82llx\" (UniqueName: \"kubernetes.io/projected/50406224-f182-48b7-b5b6-566a3245c830-kube-api-access-82llx\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.865458 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50406224-f182-48b7-b5b6-566a3245c830-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.960200 4813 generic.go:334] "Generic (PLEG): container finished" podID="50406224-f182-48b7-b5b6-566a3245c830" containerID="87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e" exitCode=0
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.960283 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"50406224-f182-48b7-b5b6-566a3245c830","Type":"ContainerDied","Data":"87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e"}
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.960346 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"50406224-f182-48b7-b5b6-566a3245c830","Type":"ContainerDied","Data":"ac5629f38383b72f5db8ec769b70717572ba75c82d69dd8e04f5bdbdd64e01f6"}
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.960376 4813 scope.go:117] "RemoveContainer" containerID="87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e"
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.960537 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.969293 4813 generic.go:334] "Generic (PLEG): container finished" podID="99e7302c-1d38-4fb6-8dfd-031f981519b3" containerID="31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2" exitCode=0
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.969430 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.969441 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"99e7302c-1d38-4fb6-8dfd-031f981519b3","Type":"ContainerDied","Data":"31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2"}
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.969482 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"99e7302c-1d38-4fb6-8dfd-031f981519b3","Type":"ContainerDied","Data":"25f77ce526d5288ed37eda1c084ba90c2591032b1c3930b3837c794aed6449a8"}
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.974209 4813 generic.go:334] "Generic (PLEG): container finished" podID="e6c74c92-c953-41ab-ae61-e6bd841f056d" containerID="eb660b5deca9b71a2ae1508e94f6722b9c33e3f59b281a72515c268ed27439d2" exitCode=0
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.974236 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-nt5bd" event={"ID":"e6c74c92-c953-41ab-ae61-e6bd841f056d","Type":"ContainerDied","Data":"eb660b5deca9b71a2ae1508e94f6722b9c33e3f59b281a72515c268ed27439d2"}
Feb 17 09:08:09 crc kubenswrapper[4813]: I0217 09:08:09.991281 4813 scope.go:117] "RemoveContainer" containerID="95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746"
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.018291 4813 scope.go:117] "RemoveContainer" containerID="87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e"
Feb 17 09:08:10 crc kubenswrapper[4813]: E0217 09:08:10.020984 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e\": container with ID starting with 87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e not found: ID does not exist" containerID="87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e"
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.021018 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e"} err="failed to get container status \"87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e\": rpc error: code = NotFound desc = could not find container \"87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e\": container with ID starting with 87f047bb6aefe6c1b48a4be6ce2b350d2feaa30c771e483896f28843f6b2422e not found: ID does not exist"
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.021037 4813 scope.go:117] "RemoveContainer" containerID="95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746"
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.021093 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Feb 17 09:08:10 crc kubenswrapper[4813]: E0217 09:08:10.021776 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746\": container with ID starting with 95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746 not found: ID does not exist" containerID="95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746"
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.021798 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746"} err="failed to get container status \"95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746\": rpc error: code = NotFound desc = could not find container \"95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746\": container with ID starting with 95470e319c4b5981df2319bc24d3e9265728a609a439fd07e37a655b1c0fb746 not found: ID does not exist"
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.021811 4813 scope.go:117] "RemoveContainer" containerID="31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2"
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.031677 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.047036 4813 scope.go:117] "RemoveContainer" containerID="6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c"
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.073555 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.078549 4813 scope.go:117] "RemoveContainer" containerID="31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2"
Feb 17 09:08:10 crc kubenswrapper[4813]: E0217 09:08:10.079441 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2\": container with ID starting with 31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2 not found: ID does not exist" containerID="31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2"
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.079496 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2"} err="failed to get container status \"31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2\": rpc error: code = NotFound desc = could not find container \"31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2\": container with ID starting with 31afd182241f1021ffd510f36c6a1e971214d3fcea52a3122f7dabb0087a24a2 not found: ID does not exist"
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.079528 4813 scope.go:117] "RemoveContainer" containerID="6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c"
Feb 17 09:08:10 crc kubenswrapper[4813]: E0217 09:08:10.081192 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c\": container with ID starting with 6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c not found: ID does not exist" containerID="6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c"
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.081251 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c"} err="failed to get container status \"6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c\": rpc error: code = NotFound desc = could not find container \"6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c\": container with ID starting with 6e32c236829a1bd6424bea929b2cdbd0d7a619b4bc7b20d7755692b56550433c not found: ID does not exist"
Feb 17 09:08:10 crc kubenswrapper[4813]: I0217 09:08:10.084359 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.141961 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50406224-f182-48b7-b5b6-566a3245c830" path="/var/lib/kubelet/pods/50406224-f182-48b7-b5b6-566a3245c830/volumes"
Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.142592 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99e7302c-1d38-4fb6-8dfd-031f981519b3" path="/var/lib/kubelet/pods/99e7302c-1d38-4fb6-8dfd-031f981519b3/volumes"
Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.205170 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.205685 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="ceilometer-central-agent" containerID="cri-o://79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a" gracePeriod=30
Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.205793 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="proxy-httpd" containerID="cri-o://13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab" gracePeriod=30
Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.205831 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="sg-core" containerID="cri-o://6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246" gracePeriod=30
Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.205861 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="ceilometer-notification-agent" containerID="cri-o://04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137" gracePeriod=30
Feb 17 09:08:11 crc kubenswrapper[4813]: E0217 09:08:11.373838 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79bcfdd1_0319_4bc1_b678_dadd4b8164b6.slice/crio-conmon-6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79bcfdd1_0319_4bc1_b678_dadd4b8164b6.slice/crio-6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79bcfdd1_0319_4bc1_b678_dadd4b8164b6.slice/crio-13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.475579 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-nt5bd"
Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.597890 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxcmz\" (UniqueName: \"kubernetes.io/projected/e6c74c92-c953-41ab-ae61-e6bd841f056d-kube-api-access-hxcmz\") pod \"e6c74c92-c953-41ab-ae61-e6bd841f056d\" (UID: \"e6c74c92-c953-41ab-ae61-e6bd841f056d\") "
Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.598924 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6c74c92-c953-41ab-ae61-e6bd841f056d-operator-scripts\") pod \"e6c74c92-c953-41ab-ae61-e6bd841f056d\" (UID: \"e6c74c92-c953-41ab-ae61-e6bd841f056d\") "
Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.599522 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6c74c92-c953-41ab-ae61-e6bd841f056d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6c74c92-c953-41ab-ae61-e6bd841f056d" (UID:
"e6c74c92-c953-41ab-ae61-e6bd841f056d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.604239 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c74c92-c953-41ab-ae61-e6bd841f056d-kube-api-access-hxcmz" (OuterVolumeSpecName: "kube-api-access-hxcmz") pod "e6c74c92-c953-41ab-ae61-e6bd841f056d" (UID: "e6c74c92-c953-41ab-ae61-e6bd841f056d"). InnerVolumeSpecName "kube-api-access-hxcmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.700468 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxcmz\" (UniqueName: \"kubernetes.io/projected/e6c74c92-c953-41ab-ae61-e6bd841f056d-kube-api-access-hxcmz\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.700821 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6c74c92-c953-41ab-ae61-e6bd841f056d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.983011 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.990295 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-nt5bd" Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.991341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-nt5bd" event={"ID":"e6c74c92-c953-41ab-ae61-e6bd841f056d","Type":"ContainerDied","Data":"01dd8cc4c056a3244c86b4fca2343755b4543671aaa94d55649fbe42d2fcc66d"} Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.991376 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01dd8cc4c056a3244c86b4fca2343755b4543671aaa94d55649fbe42d2fcc66d" Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.997652 4813 generic.go:334] "Generic (PLEG): container finished" podID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerID="13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab" exitCode=0 Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.997683 4813 generic.go:334] "Generic (PLEG): container finished" podID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerID="6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246" exitCode=2 Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.997693 4813 generic.go:334] "Generic (PLEG): container finished" podID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerID="79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a" exitCode=0 Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.997721 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79bcfdd1-0319-4bc1-b678-dadd4b8164b6","Type":"ContainerDied","Data":"13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab"} Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.997760 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"79bcfdd1-0319-4bc1-b678-dadd4b8164b6","Type":"ContainerDied","Data":"6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246"} Feb 17 09:08:11 crc kubenswrapper[4813]: I0217 09:08:11.997772 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79bcfdd1-0319-4bc1-b678-dadd4b8164b6","Type":"ContainerDied","Data":"79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a"} Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.001058 4813 generic.go:334] "Generic (PLEG): container finished" podID="84d61ff1-13a9-4f6b-870a-151ea8de7237" containerID="8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9" exitCode=0 Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.001092 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"84d61ff1-13a9-4f6b-870a-151ea8de7237","Type":"ContainerDied","Data":"8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9"} Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.001116 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"84d61ff1-13a9-4f6b-870a-151ea8de7237","Type":"ContainerDied","Data":"be5bdd4e8602dc718f6b3fc2df1f6819c31262fad7f45d9bc0454ce4f49239a1"} Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.001124 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.001131 4813 scope.go:117] "RemoveContainer" containerID="8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.029418 4813 scope.go:117] "RemoveContainer" containerID="8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9" Feb 17 09:08:12 crc kubenswrapper[4813]: E0217 09:08:12.029942 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9\": container with ID starting with 8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9 not found: ID does not exist" containerID="8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.029984 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9"} err="failed to get container status \"8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9\": rpc error: code = NotFound desc = could not find container \"8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9\": container with ID starting with 8ea91d09b2065f46d9555294568f9b8e038ed5b68829b3eb2ae4bb447d81a0f9 not found: ID does not exist" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.106841 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-combined-ca-bundle\") pod \"84d61ff1-13a9-4f6b-870a-151ea8de7237\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.106910 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-cert-memcached-mtls\") pod \"84d61ff1-13a9-4f6b-870a-151ea8de7237\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.106956 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84d61ff1-13a9-4f6b-870a-151ea8de7237-logs\") pod \"84d61ff1-13a9-4f6b-870a-151ea8de7237\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.106985 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t79jp\" (UniqueName: \"kubernetes.io/projected/84d61ff1-13a9-4f6b-870a-151ea8de7237-kube-api-access-t79jp\") pod \"84d61ff1-13a9-4f6b-870a-151ea8de7237\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.107007 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-config-data\") pod \"84d61ff1-13a9-4f6b-870a-151ea8de7237\" (UID: \"84d61ff1-13a9-4f6b-870a-151ea8de7237\") " Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.107800 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84d61ff1-13a9-4f6b-870a-151ea8de7237-logs" (OuterVolumeSpecName: "logs") pod "84d61ff1-13a9-4f6b-870a-151ea8de7237" (UID: "84d61ff1-13a9-4f6b-870a-151ea8de7237"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.110659 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d61ff1-13a9-4f6b-870a-151ea8de7237-kube-api-access-t79jp" (OuterVolumeSpecName: "kube-api-access-t79jp") pod "84d61ff1-13a9-4f6b-870a-151ea8de7237" (UID: "84d61ff1-13a9-4f6b-870a-151ea8de7237"). InnerVolumeSpecName "kube-api-access-t79jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.139733 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84d61ff1-13a9-4f6b-870a-151ea8de7237" (UID: "84d61ff1-13a9-4f6b-870a-151ea8de7237"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.147997 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-config-data" (OuterVolumeSpecName: "config-data") pod "84d61ff1-13a9-4f6b-870a-151ea8de7237" (UID: "84d61ff1-13a9-4f6b-870a-151ea8de7237"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.169809 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "84d61ff1-13a9-4f6b-870a-151ea8de7237" (UID: "84d61ff1-13a9-4f6b-870a-151ea8de7237"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.208461 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84d61ff1-13a9-4f6b-870a-151ea8de7237-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.208506 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t79jp\" (UniqueName: \"kubernetes.io/projected/84d61ff1-13a9-4f6b-870a-151ea8de7237-kube-api-access-t79jp\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.208520 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.208532 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.208546 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/84d61ff1-13a9-4f6b-870a-151ea8de7237-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.482647 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.488453 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.830288 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.926492 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-logs\") pod \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.927255 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-combined-ca-bundle\") pod \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.927796 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw9md\" (UniqueName: \"kubernetes.io/projected/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-kube-api-access-bw9md\") pod \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.927927 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-cert-memcached-mtls\") pod \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.928093 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-config-data\") pod \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.928187 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-custom-prometheus-ca\") pod \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\" (UID: \"59a5bd53-62f6-4af2-84a0-eb291da8ec2a\") " Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.927187 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-logs" (OuterVolumeSpecName: "logs") pod "59a5bd53-62f6-4af2-84a0-eb291da8ec2a" (UID: "59a5bd53-62f6-4af2-84a0-eb291da8ec2a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.928693 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.948574 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-kube-api-access-bw9md" (OuterVolumeSpecName: "kube-api-access-bw9md") pod "59a5bd53-62f6-4af2-84a0-eb291da8ec2a" (UID: "59a5bd53-62f6-4af2-84a0-eb291da8ec2a"). InnerVolumeSpecName "kube-api-access-bw9md". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.964743 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59a5bd53-62f6-4af2-84a0-eb291da8ec2a" (UID: "59a5bd53-62f6-4af2-84a0-eb291da8ec2a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:12 crc kubenswrapper[4813]: I0217 09:08:12.978714 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "59a5bd53-62f6-4af2-84a0-eb291da8ec2a" (UID: "59a5bd53-62f6-4af2-84a0-eb291da8ec2a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.008988 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-config-data" (OuterVolumeSpecName: "config-data") pod "59a5bd53-62f6-4af2-84a0-eb291da8ec2a" (UID: "59a5bd53-62f6-4af2-84a0-eb291da8ec2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.017692 4813 generic.go:334] "Generic (PLEG): container finished" podID="59a5bd53-62f6-4af2-84a0-eb291da8ec2a" containerID="884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19" exitCode=0 Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.017753 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"59a5bd53-62f6-4af2-84a0-eb291da8ec2a","Type":"ContainerDied","Data":"884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19"} Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.017777 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"59a5bd53-62f6-4af2-84a0-eb291da8ec2a","Type":"ContainerDied","Data":"7349bab98b0ef835307d96d200f881faeb9dc3df16587debe1e27a485bb319fd"} Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.017793 4813 scope.go:117] "RemoveContainer" 
containerID="884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19" Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.017875 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.029610 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.029639 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw9md\" (UniqueName: \"kubernetes.io/projected/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-kube-api-access-bw9md\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.029648 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.029657 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.029994 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "59a5bd53-62f6-4af2-84a0-eb291da8ec2a" (UID: "59a5bd53-62f6-4af2-84a0-eb291da8ec2a"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.049472 4813 scope.go:117] "RemoveContainer" containerID="884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19" Feb 17 09:08:13 crc kubenswrapper[4813]: E0217 09:08:13.049880 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19\": container with ID starting with 884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19 not found: ID does not exist" containerID="884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19" Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.049909 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19"} err="failed to get container status \"884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19\": rpc error: code = NotFound desc = could not find container \"884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19\": container with ID starting with 884b557b5dedca9bd30d13ba505e401a1de9492b152835c6d48ba67dc8a15a19 not found: ID does not exist" Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.112444 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-n9g4n"] Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.132348 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/59a5bd53-62f6-4af2-84a0-eb291da8ec2a-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.143677 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d61ff1-13a9-4f6b-870a-151ea8de7237" path="/var/lib/kubelet/pods/84d61ff1-13a9-4f6b-870a-151ea8de7237/volumes" Feb 17 
09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.144330 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-n9g4n"] Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.144360 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-nt5bd"] Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.153426 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-nt5bd"] Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.165419 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-z7nnf"] Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.166644 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-z7nnf"] Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.421029 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:08:13 crc kubenswrapper[4813]: I0217 09:08:13.432257 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.184591 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-n8n86"] Feb 17 09:08:14 crc kubenswrapper[4813]: E0217 09:08:14.184975 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50406224-f182-48b7-b5b6-566a3245c830" containerName="watcher-kuttl-api-log" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.184995 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="50406224-f182-48b7-b5b6-566a3245c830" containerName="watcher-kuttl-api-log" Feb 17 09:08:14 crc kubenswrapper[4813]: E0217 09:08:14.185013 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="59a5bd53-62f6-4af2-84a0-eb291da8ec2a" containerName="watcher-decision-engine" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185020 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a5bd53-62f6-4af2-84a0-eb291da8ec2a" containerName="watcher-decision-engine" Feb 17 09:08:14 crc kubenswrapper[4813]: E0217 09:08:14.185029 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50406224-f182-48b7-b5b6-566a3245c830" containerName="watcher-api" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185036 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="50406224-f182-48b7-b5b6-566a3245c830" containerName="watcher-api" Feb 17 09:08:14 crc kubenswrapper[4813]: E0217 09:08:14.185048 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99e7302c-1d38-4fb6-8dfd-031f981519b3" containerName="watcher-kuttl-api-log" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185055 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="99e7302c-1d38-4fb6-8dfd-031f981519b3" containerName="watcher-kuttl-api-log" Feb 17 09:08:14 crc kubenswrapper[4813]: E0217 09:08:14.185064 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c74c92-c953-41ab-ae61-e6bd841f056d" containerName="mariadb-account-delete" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185071 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c74c92-c953-41ab-ae61-e6bd841f056d" containerName="mariadb-account-delete" Feb 17 09:08:14 crc kubenswrapper[4813]: E0217 09:08:14.185083 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d61ff1-13a9-4f6b-870a-151ea8de7237" containerName="watcher-applier" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185090 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d61ff1-13a9-4f6b-870a-151ea8de7237" containerName="watcher-applier" Feb 17 09:08:14 crc kubenswrapper[4813]: E0217 09:08:14.185113 4813 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="99e7302c-1d38-4fb6-8dfd-031f981519b3" containerName="watcher-api" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185119 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="99e7302c-1d38-4fb6-8dfd-031f981519b3" containerName="watcher-api" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185296 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="50406224-f182-48b7-b5b6-566a3245c830" containerName="watcher-api" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185373 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c74c92-c953-41ab-ae61-e6bd841f056d" containerName="mariadb-account-delete" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185393 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="50406224-f182-48b7-b5b6-566a3245c830" containerName="watcher-kuttl-api-log" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185412 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d61ff1-13a9-4f6b-870a-151ea8de7237" containerName="watcher-applier" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185421 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a5bd53-62f6-4af2-84a0-eb291da8ec2a" containerName="watcher-decision-engine" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185433 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="99e7302c-1d38-4fb6-8dfd-031f981519b3" containerName="watcher-kuttl-api-log" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.185443 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="99e7302c-1d38-4fb6-8dfd-031f981519b3" containerName="watcher-api" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.186071 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-n8n86" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.198199 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-n8n86"] Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.249131 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzcxw\" (UniqueName: \"kubernetes.io/projected/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9-kube-api-access-jzcxw\") pod \"watcher-db-create-n8n86\" (UID: \"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9\") " pod="watcher-kuttl-default/watcher-db-create-n8n86" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.249227 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9-operator-scripts\") pod \"watcher-db-create-n8n86\" (UID: \"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9\") " pod="watcher-kuttl-default/watcher-db-create-n8n86" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.277183 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg"] Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.278484 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.280654 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.289280 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg"] Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.359888 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzcxw\" (UniqueName: \"kubernetes.io/projected/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9-kube-api-access-jzcxw\") pod \"watcher-db-create-n8n86\" (UID: \"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9\") " pod="watcher-kuttl-default/watcher-db-create-n8n86" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.359984 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9-operator-scripts\") pod \"watcher-db-create-n8n86\" (UID: \"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9\") " pod="watcher-kuttl-default/watcher-db-create-n8n86" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.360042 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd68cea-e70d-4575-83e6-5bf81ede6566-operator-scripts\") pod \"watcher-2b33-account-create-update-h8lqg\" (UID: \"afd68cea-e70d-4575-83e6-5bf81ede6566\") " pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.360074 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl7rs\" (UniqueName: \"kubernetes.io/projected/afd68cea-e70d-4575-83e6-5bf81ede6566-kube-api-access-vl7rs\") pod 
\"watcher-2b33-account-create-update-h8lqg\" (UID: \"afd68cea-e70d-4575-83e6-5bf81ede6566\") " pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.361085 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9-operator-scripts\") pod \"watcher-db-create-n8n86\" (UID: \"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9\") " pod="watcher-kuttl-default/watcher-db-create-n8n86" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.399230 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzcxw\" (UniqueName: \"kubernetes.io/projected/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9-kube-api-access-jzcxw\") pod \"watcher-db-create-n8n86\" (UID: \"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9\") " pod="watcher-kuttl-default/watcher-db-create-n8n86" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.461286 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd68cea-e70d-4575-83e6-5bf81ede6566-operator-scripts\") pod \"watcher-2b33-account-create-update-h8lqg\" (UID: \"afd68cea-e70d-4575-83e6-5bf81ede6566\") " pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.461350 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl7rs\" (UniqueName: \"kubernetes.io/projected/afd68cea-e70d-4575-83e6-5bf81ede6566-kube-api-access-vl7rs\") pod \"watcher-2b33-account-create-update-h8lqg\" (UID: \"afd68cea-e70d-4575-83e6-5bf81ede6566\") " pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.462255 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/afd68cea-e70d-4575-83e6-5bf81ede6566-operator-scripts\") pod \"watcher-2b33-account-create-update-h8lqg\" (UID: \"afd68cea-e70d-4575-83e6-5bf81ede6566\") " pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.484872 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl7rs\" (UniqueName: \"kubernetes.io/projected/afd68cea-e70d-4575-83e6-5bf81ede6566-kube-api-access-vl7rs\") pod \"watcher-2b33-account-create-update-h8lqg\" (UID: \"afd68cea-e70d-4575-83e6-5bf81ede6566\") " pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.511690 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-n8n86" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.652261 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.671418 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.764594 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-log-httpd\") pod \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.764644 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-scripts\") pod \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.764688 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-sg-core-conf-yaml\") pod \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.764730 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-combined-ca-bundle\") pod \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.764754 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-run-httpd\") pod \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.764779 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcctg\" (UniqueName: 
\"kubernetes.io/projected/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-kube-api-access-qcctg\") pod \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.764813 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-ceilometer-tls-certs\") pod \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.764845 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-config-data\") pod \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\" (UID: \"79bcfdd1-0319-4bc1-b678-dadd4b8164b6\") " Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.766917 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "79bcfdd1-0319-4bc1-b678-dadd4b8164b6" (UID: "79bcfdd1-0319-4bc1-b678-dadd4b8164b6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.767400 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "79bcfdd1-0319-4bc1-b678-dadd4b8164b6" (UID: "79bcfdd1-0319-4bc1-b678-dadd4b8164b6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.769284 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-kube-api-access-qcctg" (OuterVolumeSpecName: "kube-api-access-qcctg") pod "79bcfdd1-0319-4bc1-b678-dadd4b8164b6" (UID: "79bcfdd1-0319-4bc1-b678-dadd4b8164b6"). InnerVolumeSpecName "kube-api-access-qcctg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.770179 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-scripts" (OuterVolumeSpecName: "scripts") pod "79bcfdd1-0319-4bc1-b678-dadd4b8164b6" (UID: "79bcfdd1-0319-4bc1-b678-dadd4b8164b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.820426 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "79bcfdd1-0319-4bc1-b678-dadd4b8164b6" (UID: "79bcfdd1-0319-4bc1-b678-dadd4b8164b6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.831704 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "79bcfdd1-0319-4bc1-b678-dadd4b8164b6" (UID: "79bcfdd1-0319-4bc1-b678-dadd4b8164b6"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.866109 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.866134 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcctg\" (UniqueName: \"kubernetes.io/projected/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-kube-api-access-qcctg\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.866143 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.866152 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.866160 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.866173 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.874563 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79bcfdd1-0319-4bc1-b678-dadd4b8164b6" (UID: 
"79bcfdd1-0319-4bc1-b678-dadd4b8164b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.891833 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-config-data" (OuterVolumeSpecName: "config-data") pod "79bcfdd1-0319-4bc1-b678-dadd4b8164b6" (UID: "79bcfdd1-0319-4bc1-b678-dadd4b8164b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.967471 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:14 crc kubenswrapper[4813]: I0217 09:08:14.967502 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79bcfdd1-0319-4bc1-b678-dadd4b8164b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.017064 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-n8n86"] Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.037079 4813 generic.go:334] "Generic (PLEG): container finished" podID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerID="04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137" exitCode=0 Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.037131 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79bcfdd1-0319-4bc1-b678-dadd4b8164b6","Type":"ContainerDied","Data":"04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137"} Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.037159 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.037191 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"79bcfdd1-0319-4bc1-b678-dadd4b8164b6","Type":"ContainerDied","Data":"10decee02c3a9654a30783ee1596b6605e6e72daceeabb602de1e26cf3112d1c"} Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.037229 4813 scope.go:117] "RemoveContainer" containerID="13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.038464 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-n8n86" event={"ID":"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9","Type":"ContainerStarted","Data":"bef5d2299595b63b80bb2269ef1c26c1ac63a976a8b9786715326d07fed091fa"} Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.068847 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.070238 4813 scope.go:117] "RemoveContainer" containerID="6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.075869 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.095355 4813 scope.go:117] "RemoveContainer" containerID="04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.097514 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:08:15 crc kubenswrapper[4813]: E0217 09:08:15.097800 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="ceilometer-notification-agent" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.097817 4813 
state_mem.go:107] "Deleted CPUSet assignment" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="ceilometer-notification-agent" Feb 17 09:08:15 crc kubenswrapper[4813]: E0217 09:08:15.097831 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="proxy-httpd" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.097838 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="proxy-httpd" Feb 17 09:08:15 crc kubenswrapper[4813]: E0217 09:08:15.097851 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="sg-core" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.097857 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="sg-core" Feb 17 09:08:15 crc kubenswrapper[4813]: E0217 09:08:15.097875 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="ceilometer-central-agent" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.097881 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="ceilometer-central-agent" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.098013 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="ceilometer-central-agent" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.098024 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="proxy-httpd" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.098038 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="sg-core" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.098058 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" containerName="ceilometer-notification-agent" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.099883 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.103631 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.104795 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.105231 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.126271 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5db388-49f3-42fa-839d-bf0eaefa3576" path="/var/lib/kubelet/pods/4c5db388-49f3-42fa-839d-bf0eaefa3576/volumes" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.128384 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a5bd53-62f6-4af2-84a0-eb291da8ec2a" path="/var/lib/kubelet/pods/59a5bd53-62f6-4af2-84a0-eb291da8ec2a/volumes" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.129003 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79bcfdd1-0319-4bc1-b678-dadd4b8164b6" path="/var/lib/kubelet/pods/79bcfdd1-0319-4bc1-b678-dadd4b8164b6/volumes" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.131566 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="908296ce-c930-4c42-aff4-dbd27fe4c613" path="/var/lib/kubelet/pods/908296ce-c930-4c42-aff4-dbd27fe4c613/volumes" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.132034 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e6c74c92-c953-41ab-ae61-e6bd841f056d" path="/var/lib/kubelet/pods/e6c74c92-c953-41ab-ae61-e6bd841f056d/volumes" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.134342 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.147517 4813 scope.go:117] "RemoveContainer" containerID="79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.168420 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg"] Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.171458 4813 scope.go:117] "RemoveContainer" containerID="13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab" Feb 17 09:08:15 crc kubenswrapper[4813]: E0217 09:08:15.171815 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab\": container with ID starting with 13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab not found: ID does not exist" containerID="13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.171846 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab"} err="failed to get container status \"13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab\": rpc error: code = NotFound desc = could not find container \"13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab\": container with ID starting with 13ddf602fbe134cab28fc4de76c7041107e20fe843d1ed86cd1df466f5d694ab not found: ID does not exist" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.171867 4813 scope.go:117] "RemoveContainer" 
containerID="6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246" Feb 17 09:08:15 crc kubenswrapper[4813]: E0217 09:08:15.172213 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246\": container with ID starting with 6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246 not found: ID does not exist" containerID="6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.172235 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246"} err="failed to get container status \"6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246\": rpc error: code = NotFound desc = could not find container \"6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246\": container with ID starting with 6dd4fe7abb619193aeeec9123cc8aefe8f7f57ccc764ef73bcfea3a08b190246 not found: ID does not exist" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.172248 4813 scope.go:117] "RemoveContainer" containerID="04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137" Feb 17 09:08:15 crc kubenswrapper[4813]: E0217 09:08:15.172440 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137\": container with ID starting with 04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137 not found: ID does not exist" containerID="04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.172463 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137"} err="failed to get container status \"04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137\": rpc error: code = NotFound desc = could not find container \"04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137\": container with ID starting with 04b6311fd0f59cc421c53f2658dd9089189f5f36f7f46332c760ee092d6d2137 not found: ID does not exist" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.172474 4813 scope.go:117] "RemoveContainer" containerID="79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a" Feb 17 09:08:15 crc kubenswrapper[4813]: E0217 09:08:15.172981 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a\": container with ID starting with 79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a not found: ID does not exist" containerID="79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.173003 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a"} err="failed to get container status \"79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a\": rpc error: code = NotFound desc = could not find container \"79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a\": container with ID starting with 79fc34220dcf01aa5b0b912db1b7ecb4178fbaaaf5cd25532e64270002dee79a not found: ID does not exist" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.174186 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.174227 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-run-httpd\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.174246 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.174269 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-log-httpd\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.174346 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2wk9\" (UniqueName: \"kubernetes.io/projected/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-kube-api-access-r2wk9\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.174362 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-scripts\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.174390 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-config-data\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.174411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: W0217 09:08:15.176719 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafd68cea_e70d_4575_83e6_5bf81ede6566.slice/crio-4a1388b48c14138ac8b98f2142b247909345e6b70735a1a17718d2eb02b4b3b1 WatchSource:0}: Error finding container 4a1388b48c14138ac8b98f2142b247909345e6b70735a1a17718d2eb02b4b3b1: Status 404 returned error can't find the container with id 4a1388b48c14138ac8b98f2142b247909345e6b70735a1a17718d2eb02b4b3b1 Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.275446 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2wk9\" (UniqueName: \"kubernetes.io/projected/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-kube-api-access-r2wk9\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.275709 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-scripts\") pod \"ceilometer-0\" 
(UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.275741 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-config-data\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.275761 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.275818 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.275840 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.275855 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-run-httpd\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.275877 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-log-httpd\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.276355 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-run-httpd\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.276541 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-log-httpd\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.300793 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-scripts\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.300840 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.300929 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.301235 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-config-data\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.301303 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.307066 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2wk9\" (UniqueName: \"kubernetes.io/projected/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-kube-api-access-r2wk9\") pod \"ceilometer-0\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.438446 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:15 crc kubenswrapper[4813]: I0217 09:08:15.973360 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:08:15 crc kubenswrapper[4813]: W0217 09:08:15.974501 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d56bd8e_3dc9_434c_afb0_94dcbc5e9982.slice/crio-70480e5320429323c1b388cf359f27d456a0a6538e7bfce514dde4ba6c2c571f WatchSource:0}: Error finding container 70480e5320429323c1b388cf359f27d456a0a6538e7bfce514dde4ba6c2c571f: Status 404 returned error can't find the container with id 70480e5320429323c1b388cf359f27d456a0a6538e7bfce514dde4ba6c2c571f Feb 17 09:08:16 crc kubenswrapper[4813]: I0217 09:08:16.049328 4813 generic.go:334] "Generic (PLEG): container finished" podID="16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9" containerID="a1f6c5382ccd3d2571ae05b670fc8f3c994ef621ed68697d569f318f514d0730" exitCode=0 Feb 17 09:08:16 crc kubenswrapper[4813]: I0217 09:08:16.049394 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-n8n86" event={"ID":"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9","Type":"ContainerDied","Data":"a1f6c5382ccd3d2571ae05b670fc8f3c994ef621ed68697d569f318f514d0730"} Feb 17 09:08:16 crc kubenswrapper[4813]: I0217 09:08:16.050297 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982","Type":"ContainerStarted","Data":"70480e5320429323c1b388cf359f27d456a0a6538e7bfce514dde4ba6c2c571f"} Feb 17 09:08:16 crc kubenswrapper[4813]: I0217 09:08:16.051415 4813 generic.go:334] "Generic (PLEG): container finished" podID="afd68cea-e70d-4575-83e6-5bf81ede6566" containerID="07507a2aa1c71de5bf4db5efb4923151e226d4b882083028e01254b60dcdcef3" exitCode=0 Feb 17 09:08:16 crc kubenswrapper[4813]: I0217 09:08:16.051467 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" event={"ID":"afd68cea-e70d-4575-83e6-5bf81ede6566","Type":"ContainerDied","Data":"07507a2aa1c71de5bf4db5efb4923151e226d4b882083028e01254b60dcdcef3"} Feb 17 09:08:16 crc kubenswrapper[4813]: I0217 09:08:16.051530 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" event={"ID":"afd68cea-e70d-4575-83e6-5bf81ede6566","Type":"ContainerStarted","Data":"4a1388b48c14138ac8b98f2142b247909345e6b70735a1a17718d2eb02b4b3b1"} Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.059511 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982","Type":"ContainerStarted","Data":"6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163"} Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.111347 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:08:17 crc kubenswrapper[4813]: E0217 09:08:17.111580 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.573986 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.586219 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-n8n86" Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.610968 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl7rs\" (UniqueName: \"kubernetes.io/projected/afd68cea-e70d-4575-83e6-5bf81ede6566-kube-api-access-vl7rs\") pod \"afd68cea-e70d-4575-83e6-5bf81ede6566\" (UID: \"afd68cea-e70d-4575-83e6-5bf81ede6566\") " Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.611193 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzcxw\" (UniqueName: \"kubernetes.io/projected/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9-kube-api-access-jzcxw\") pod \"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9\" (UID: \"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9\") " Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.611241 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd68cea-e70d-4575-83e6-5bf81ede6566-operator-scripts\") pod \"afd68cea-e70d-4575-83e6-5bf81ede6566\" (UID: \"afd68cea-e70d-4575-83e6-5bf81ede6566\") " Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.611315 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9-operator-scripts\") pod \"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9\" (UID: \"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9\") " Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.614122 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9" (UID: "16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.614779 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd68cea-e70d-4575-83e6-5bf81ede6566-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afd68cea-e70d-4575-83e6-5bf81ede6566" (UID: "afd68cea-e70d-4575-83e6-5bf81ede6566"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.615620 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9-kube-api-access-jzcxw" (OuterVolumeSpecName: "kube-api-access-jzcxw") pod "16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9" (UID: "16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9"). InnerVolumeSpecName "kube-api-access-jzcxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.621744 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd68cea-e70d-4575-83e6-5bf81ede6566-kube-api-access-vl7rs" (OuterVolumeSpecName: "kube-api-access-vl7rs") pod "afd68cea-e70d-4575-83e6-5bf81ede6566" (UID: "afd68cea-e70d-4575-83e6-5bf81ede6566"). InnerVolumeSpecName "kube-api-access-vl7rs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.712630 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzcxw\" (UniqueName: \"kubernetes.io/projected/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9-kube-api-access-jzcxw\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.713007 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd68cea-e70d-4575-83e6-5bf81ede6566-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.713018 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:17 crc kubenswrapper[4813]: I0217 09:08:17.713026 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl7rs\" (UniqueName: \"kubernetes.io/projected/afd68cea-e70d-4575-83e6-5bf81ede6566-kube-api-access-vl7rs\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:18 crc kubenswrapper[4813]: I0217 09:08:18.070231 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-n8n86" event={"ID":"16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9","Type":"ContainerDied","Data":"bef5d2299595b63b80bb2269ef1c26c1ac63a976a8b9786715326d07fed091fa"} Feb 17 09:08:18 crc kubenswrapper[4813]: I0217 09:08:18.070275 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bef5d2299595b63b80bb2269ef1c26c1ac63a976a8b9786715326d07fed091fa" Feb 17 09:08:18 crc kubenswrapper[4813]: I0217 09:08:18.070383 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-n8n86" Feb 17 09:08:18 crc kubenswrapper[4813]: I0217 09:08:18.073226 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982","Type":"ContainerStarted","Data":"d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de"} Feb 17 09:08:18 crc kubenswrapper[4813]: I0217 09:08:18.075223 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" event={"ID":"afd68cea-e70d-4575-83e6-5bf81ede6566","Type":"ContainerDied","Data":"4a1388b48c14138ac8b98f2142b247909345e6b70735a1a17718d2eb02b4b3b1"} Feb 17 09:08:18 crc kubenswrapper[4813]: I0217 09:08:18.075268 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a1388b48c14138ac8b98f2142b247909345e6b70735a1a17718d2eb02b4b3b1" Feb 17 09:08:18 crc kubenswrapper[4813]: I0217 09:08:18.075327 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.086241 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982","Type":"ContainerStarted","Data":"1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb"} Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.666379 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d5skl"] Feb 17 09:08:19 crc kubenswrapper[4813]: E0217 09:08:19.666980 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9" containerName="mariadb-database-create" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.666997 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9" containerName="mariadb-database-create" Feb 17 09:08:19 crc kubenswrapper[4813]: E0217 09:08:19.667023 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd68cea-e70d-4575-83e6-5bf81ede6566" containerName="mariadb-account-create-update" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.667030 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd68cea-e70d-4575-83e6-5bf81ede6566" containerName="mariadb-account-create-update" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.667178 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9" containerName="mariadb-database-create" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.667204 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd68cea-e70d-4575-83e6-5bf81ede6566" containerName="mariadb-account-create-update" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.667794 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.670412 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.670583 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6xzxm" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.682853 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d5skl"] Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.845049 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-config-data\") pod \"watcher-kuttl-db-sync-d5skl\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.845116 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djvsh\" (UniqueName: \"kubernetes.io/projected/d5ac057e-5e86-48c3-b6f6-077d203c9659-kube-api-access-djvsh\") pod \"watcher-kuttl-db-sync-d5skl\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.845149 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-d5skl\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.845222 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-db-sync-config-data\") pod \"watcher-kuttl-db-sync-d5skl\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.946723 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-config-data\") pod \"watcher-kuttl-db-sync-d5skl\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.946807 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djvsh\" (UniqueName: \"kubernetes.io/projected/d5ac057e-5e86-48c3-b6f6-077d203c9659-kube-api-access-djvsh\") pod \"watcher-kuttl-db-sync-d5skl\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.946853 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-d5skl\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.946967 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-db-sync-config-data\") pod \"watcher-kuttl-db-sync-d5skl\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 
09:08:19.951709 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-config-data\") pod \"watcher-kuttl-db-sync-d5skl\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.953126 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-db-sync-config-data\") pod \"watcher-kuttl-db-sync-d5skl\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.954002 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-d5skl\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.970165 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djvsh\" (UniqueName: \"kubernetes.io/projected/d5ac057e-5e86-48c3-b6f6-077d203c9659-kube-api-access-djvsh\") pod \"watcher-kuttl-db-sync-d5skl\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:19 crc kubenswrapper[4813]: I0217 09:08:19.981360 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:20 crc kubenswrapper[4813]: I0217 09:08:20.103586 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982","Type":"ContainerStarted","Data":"815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064"} Feb 17 09:08:20 crc kubenswrapper[4813]: I0217 09:08:20.103872 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:20 crc kubenswrapper[4813]: I0217 09:08:20.139962 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.6732348959999999 podStartE2EDuration="5.139933933s" podCreationTimestamp="2026-02-17 09:08:15 +0000 UTC" firstStartedPulling="2026-02-17 09:08:15.978087189 +0000 UTC m=+1643.638848412" lastFinishedPulling="2026-02-17 09:08:19.444786226 +0000 UTC m=+1647.105547449" observedRunningTime="2026-02-17 09:08:20.136219077 +0000 UTC m=+1647.796980310" watchObservedRunningTime="2026-02-17 09:08:20.139933933 +0000 UTC m=+1647.800695166" Feb 17 09:08:20 crc kubenswrapper[4813]: I0217 09:08:20.556882 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d5skl"] Feb 17 09:08:21 crc kubenswrapper[4813]: I0217 09:08:21.127804 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" event={"ID":"d5ac057e-5e86-48c3-b6f6-077d203c9659","Type":"ContainerStarted","Data":"9a2323754e71e2b69d3eafbcc8cd70fc1a2050ac1fdc90edb02c37dfac8a4bdb"} Feb 17 09:08:21 crc kubenswrapper[4813]: I0217 09:08:21.128047 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" 
event={"ID":"d5ac057e-5e86-48c3-b6f6-077d203c9659","Type":"ContainerStarted","Data":"dfa8f1cbf5680054841f3b2c476aaa6fda4dd0b8e48c162bd0b153ea2f5d5f46"} Feb 17 09:08:21 crc kubenswrapper[4813]: I0217 09:08:21.161834 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" podStartSLOduration=2.161811582 podStartE2EDuration="2.161811582s" podCreationTimestamp="2026-02-17 09:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:08:21.140996786 +0000 UTC m=+1648.801758009" watchObservedRunningTime="2026-02-17 09:08:21.161811582 +0000 UTC m=+1648.822572805" Feb 17 09:08:23 crc kubenswrapper[4813]: I0217 09:08:23.143238 4813 generic.go:334] "Generic (PLEG): container finished" podID="d5ac057e-5e86-48c3-b6f6-077d203c9659" containerID="9a2323754e71e2b69d3eafbcc8cd70fc1a2050ac1fdc90edb02c37dfac8a4bdb" exitCode=0 Feb 17 09:08:23 crc kubenswrapper[4813]: I0217 09:08:23.143322 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" event={"ID":"d5ac057e-5e86-48c3-b6f6-077d203c9659","Type":"ContainerDied","Data":"9a2323754e71e2b69d3eafbcc8cd70fc1a2050ac1fdc90edb02c37dfac8a4bdb"} Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.549654 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.639912 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-db-sync-config-data\") pod \"d5ac057e-5e86-48c3-b6f6-077d203c9659\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.639974 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djvsh\" (UniqueName: \"kubernetes.io/projected/d5ac057e-5e86-48c3-b6f6-077d203c9659-kube-api-access-djvsh\") pod \"d5ac057e-5e86-48c3-b6f6-077d203c9659\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.640012 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-config-data\") pod \"d5ac057e-5e86-48c3-b6f6-077d203c9659\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.640066 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-combined-ca-bundle\") pod \"d5ac057e-5e86-48c3-b6f6-077d203c9659\" (UID: \"d5ac057e-5e86-48c3-b6f6-077d203c9659\") " Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.645443 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d5ac057e-5e86-48c3-b6f6-077d203c9659" (UID: "d5ac057e-5e86-48c3-b6f6-077d203c9659"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.648164 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ac057e-5e86-48c3-b6f6-077d203c9659-kube-api-access-djvsh" (OuterVolumeSpecName: "kube-api-access-djvsh") pod "d5ac057e-5e86-48c3-b6f6-077d203c9659" (UID: "d5ac057e-5e86-48c3-b6f6-077d203c9659"). InnerVolumeSpecName "kube-api-access-djvsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.674471 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5ac057e-5e86-48c3-b6f6-077d203c9659" (UID: "d5ac057e-5e86-48c3-b6f6-077d203c9659"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.687820 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-config-data" (OuterVolumeSpecName: "config-data") pod "d5ac057e-5e86-48c3-b6f6-077d203c9659" (UID: "d5ac057e-5e86-48c3-b6f6-077d203c9659"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.742611 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.742646 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djvsh\" (UniqueName: \"kubernetes.io/projected/d5ac057e-5e86-48c3-b6f6-077d203c9659-kube-api-access-djvsh\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.742658 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:24 crc kubenswrapper[4813]: I0217 09:08:24.742666 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac057e-5e86-48c3-b6f6-077d203c9659-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.166163 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" event={"ID":"d5ac057e-5e86-48c3-b6f6-077d203c9659","Type":"ContainerDied","Data":"dfa8f1cbf5680054841f3b2c476aaa6fda4dd0b8e48c162bd0b153ea2f5d5f46"} Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.166212 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfa8f1cbf5680054841f3b2c476aaa6fda4dd0b8e48c162bd0b153ea2f5d5f46" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.166287 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d5skl" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.462188 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:08:25 crc kubenswrapper[4813]: E0217 09:08:25.462736 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ac057e-5e86-48c3-b6f6-077d203c9659" containerName="watcher-kuttl-db-sync" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.462870 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ac057e-5e86-48c3-b6f6-077d203c9659" containerName="watcher-kuttl-db-sync" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.463087 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ac057e-5e86-48c3-b6f6-077d203c9659" containerName="watcher-kuttl-db-sync" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.463667 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.467173 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6xzxm" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.468160 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.476775 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.554669 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.555476 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4csb8\" (UniqueName: \"kubernetes.io/projected/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-kube-api-access-4csb8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.555652 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.555771 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.555929 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.556038 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.561362 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.562315 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.580590 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.580772 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.625433 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.626801 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.628486 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.631812 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.662313 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.662391 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4csb8\" (UniqueName: \"kubernetes.io/projected/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-kube-api-access-4csb8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.662420 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.662451 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.662483 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.662512 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.662539 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4cm9\" (UniqueName: \"kubernetes.io/projected/c2c3932a-3be9-4161-b873-a8a905d7c639-kube-api-access-m4cm9\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.662563 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.662925 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: 
\"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.664645 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.664689 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.664859 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.664891 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.664935 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.664962 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2c3932a-3be9-4161-b873-a8a905d7c639-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.664986 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.665006 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.665071 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjbsm\" (UniqueName: \"kubernetes.io/projected/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-kube-api-access-gjbsm\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.670296 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-config-data\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.676225 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.676717 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.678444 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4csb8\" (UniqueName: \"kubernetes.io/projected/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-kube-api-access-4csb8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.685164 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.766782 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4cm9\" (UniqueName: 
\"kubernetes.io/projected/c2c3932a-3be9-4161-b873-a8a905d7c639-kube-api-access-m4cm9\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.766846 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.766869 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.766899 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.766928 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.766959 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c2c3932a-3be9-4161-b873-a8a905d7c639-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.766984 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.767016 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjbsm\" (UniqueName: \"kubernetes.io/projected/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-kube-api-access-gjbsm\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.767060 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.767080 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.767106 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-logs\") 
pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.767523 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.768123 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2c3932a-3be9-4161-b873-a8a905d7c639-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.777964 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.778778 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.782699 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 
crc kubenswrapper[4813]: I0217 09:08:25.783047 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.783120 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.786745 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.795971 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.811880 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjbsm\" (UniqueName: \"kubernetes.io/projected/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-kube-api-access-gjbsm\") pod \"watcher-kuttl-applier-0\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.819845 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4cm9\" (UniqueName: \"kubernetes.io/projected/c2c3932a-3be9-4161-b873-a8a905d7c639-kube-api-access-m4cm9\") pod \"watcher-kuttl-api-0\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.829760 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.897504 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:25 crc kubenswrapper[4813]: I0217 09:08:25.946905 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:26 crc kubenswrapper[4813]: I0217 09:08:26.325116 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:08:26 crc kubenswrapper[4813]: I0217 09:08:26.417934 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:08:26 crc kubenswrapper[4813]: W0217 09:08:26.423649 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8c3c80f_dcca_441c_9d3b_a9b20a078b2f.slice/crio-06f834cedbda38210c8a09df50244bcd8934ff0e14d05f0c28cb4fbbf00b3a8b WatchSource:0}: Error finding container 06f834cedbda38210c8a09df50244bcd8934ff0e14d05f0c28cb4fbbf00b3a8b: Status 404 returned error can't find the container with id 06f834cedbda38210c8a09df50244bcd8934ff0e14d05f0c28cb4fbbf00b3a8b Feb 17 09:08:26 crc kubenswrapper[4813]: W0217 09:08:26.493236 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2c3932a_3be9_4161_b873_a8a905d7c639.slice/crio-11d893c77160f6aef14c7b90a1c9f71a5c2b3c72bbebc065b8c4cdcb003abc5e WatchSource:0}: Error finding container 11d893c77160f6aef14c7b90a1c9f71a5c2b3c72bbebc065b8c4cdcb003abc5e: Status 404 returned error can't find the container with id 11d893c77160f6aef14c7b90a1c9f71a5c2b3c72bbebc065b8c4cdcb003abc5e Feb 17 09:08:26 crc kubenswrapper[4813]: I0217 09:08:26.496247 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:08:27 crc kubenswrapper[4813]: I0217 09:08:27.182105 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f","Type":"ContainerStarted","Data":"ad51458a2103ce0886b2731c64e1675e3e4fb80670ea7978ab3494b474e3aedf"} Feb 17 09:08:27 crc kubenswrapper[4813]: I0217 09:08:27.182407 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f","Type":"ContainerStarted","Data":"06f834cedbda38210c8a09df50244bcd8934ff0e14d05f0c28cb4fbbf00b3a8b"} Feb 17 09:08:27 crc kubenswrapper[4813]: I0217 09:08:27.183698 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca","Type":"ContainerStarted","Data":"141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808"} Feb 17 09:08:27 crc kubenswrapper[4813]: I0217 09:08:27.183744 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca","Type":"ContainerStarted","Data":"43fb1c4cef43398f33019c17ef821c03f9dfe94faa9a6bd0260b8713d6a633d0"} Feb 17 09:08:27 crc kubenswrapper[4813]: I0217 09:08:27.185829 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c2c3932a-3be9-4161-b873-a8a905d7c639","Type":"ContainerStarted","Data":"a9a7ba3924ec3adf494ceee70cdbc828731839a691b733f2f60fad9aeae69812"} Feb 17 09:08:27 crc kubenswrapper[4813]: I0217 09:08:27.185870 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c2c3932a-3be9-4161-b873-a8a905d7c639","Type":"ContainerStarted","Data":"75bc40bdc7b1f376cbecb207bcf65260e9d02d628307ff22d3d370b07a1a8f23"} Feb 17 09:08:27 crc kubenswrapper[4813]: I0217 09:08:27.185906 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c2c3932a-3be9-4161-b873-a8a905d7c639","Type":"ContainerStarted","Data":"11d893c77160f6aef14c7b90a1c9f71a5c2b3c72bbebc065b8c4cdcb003abc5e"} Feb 17 09:08:27 crc kubenswrapper[4813]: I0217 09:08:27.186512 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:27 crc kubenswrapper[4813]: I0217 09:08:27.212405 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.212387928 podStartE2EDuration="2.212387928s" podCreationTimestamp="2026-02-17 09:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:08:27.206711896 +0000 UTC m=+1654.867473119" watchObservedRunningTime="2026-02-17 09:08:27.212387928 +0000 UTC m=+1654.873149151" Feb 17 09:08:27 crc kubenswrapper[4813]: I0217 09:08:27.237715 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.237699233 podStartE2EDuration="2.237699233s" podCreationTimestamp="2026-02-17 09:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:08:27.233614916 +0000 UTC m=+1654.894376139" watchObservedRunningTime="2026-02-17 09:08:27.237699233 +0000 UTC m=+1654.898460456" Feb 17 09:08:27 crc kubenswrapper[4813]: I0217 09:08:27.253500 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.253480045 podStartE2EDuration="2.253480045s" podCreationTimestamp="2026-02-17 09:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:08:27.250382956 +0000 UTC m=+1654.911144319" watchObservedRunningTime="2026-02-17 09:08:27.253480045 +0000 UTC m=+1654.914241268" Feb 17 09:08:27 crc kubenswrapper[4813]: I0217 09:08:27.917535 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:29 crc kubenswrapper[4813]: I0217 09:08:29.101958 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:29 crc kubenswrapper[4813]: I0217 09:08:29.367290 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:30 crc kubenswrapper[4813]: I0217 09:08:30.111067 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:08:30 crc kubenswrapper[4813]: E0217 09:08:30.111391 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:08:30 crc kubenswrapper[4813]: I0217 09:08:30.309654 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:30 crc kubenswrapper[4813]: I0217 09:08:30.899118 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:08:30 crc kubenswrapper[4813]: I0217 09:08:30.947679 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:08:31 crc kubenswrapper[4813]: I0217 09:08:31.531058 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:32 crc kubenswrapper[4813]: I0217 09:08:32.802036 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:33 crc kubenswrapper[4813]: I0217 09:08:33.975163 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:35 crc kubenswrapper[4813]: I0217 09:08:35.224845 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:35 crc kubenswrapper[4813]: I0217 09:08:35.830064 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:08:35 crc kubenswrapper[4813]: I0217 09:08:35.859727 4813 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:08:35 crc kubenswrapper[4813]: I0217 09:08:35.898164 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:08:35 crc kubenswrapper[4813]: I0217 09:08:35.941529 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:08:35 crc kubenswrapper[4813]: I0217 09:08:35.947186 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:08:35 crc kubenswrapper[4813]: I0217 09:08:35.953504 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:08:36 crc kubenswrapper[4813]: I0217 09:08:36.278940 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:08:36 crc kubenswrapper[4813]: I0217 09:08:36.299637 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:08:36 crc kubenswrapper[4813]: I0217 09:08:36.332154 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:08:36 crc kubenswrapper[4813]: I0217 09:08:36.335745 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:08:36 crc kubenswrapper[4813]: I0217 09:08:36.469870 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log"
Feb 17 09:08:37 crc kubenswrapper[4813]: I0217 09:08:37.685662 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log"
Feb 17 09:08:37 crc kubenswrapper[4813]: I0217 09:08:37.828539 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:08:37 crc kubenswrapper[4813]: I0217 09:08:37.828892 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="ceilometer-central-agent" containerID="cri-o://6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163" gracePeriod=30
Feb 17 09:08:37 crc kubenswrapper[4813]: I0217 09:08:37.828927 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="sg-core" containerID="cri-o://1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb" gracePeriod=30
Feb 17 09:08:37 crc kubenswrapper[4813]: I0217 09:08:37.829014 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="ceilometer-notification-agent" containerID="cri-o://d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de" gracePeriod=30
Feb 17 09:08:37 crc kubenswrapper[4813]: I0217 09:08:37.829045 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="proxy-httpd" containerID="cri-o://815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064" gracePeriod=30
Feb 17 09:08:37 crc kubenswrapper[4813]: I0217 09:08:37.836019 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.222:3000/\": read tcp 10.217.0.2:41142->10.217.0.222:3000: read: connection reset by peer"
Feb 17 09:08:37 crc kubenswrapper[4813]: I0217 09:08:37.996102 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.186487 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-create-s5567"]
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.187682 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-s5567"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.196417 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-s5567"]
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.279705 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3253024-0605-4e0e-a476-d7947b0880ba-operator-scripts\") pod \"cinder-db-create-s5567\" (UID: \"f3253024-0605-4e0e-a476-d7947b0880ba\") " pod="watcher-kuttl-default/cinder-db-create-s5567"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.280103 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km8qd\" (UniqueName: \"kubernetes.io/projected/f3253024-0605-4e0e-a476-d7947b0880ba-kube-api-access-km8qd\") pod \"cinder-db-create-s5567\" (UID: \"f3253024-0605-4e0e-a476-d7947b0880ba\") " pod="watcher-kuttl-default/cinder-db-create-s5567"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.282399 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-7579-account-create-update-t76tx"]
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.283428 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.285624 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-db-secret"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.299411 4813 generic.go:334] "Generic (PLEG): container finished" podID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerID="815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064" exitCode=0
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.299450 4813 generic.go:334] "Generic (PLEG): container finished" podID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerID="1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb" exitCode=2
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.299462 4813 generic.go:334] "Generic (PLEG): container finished" podID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerID="6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163" exitCode=0
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.299452 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982","Type":"ContainerDied","Data":"815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064"}
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.299695 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982","Type":"ContainerDied","Data":"1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb"}
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.299726 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982","Type":"ContainerDied","Data":"6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163"}
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.300895 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-7579-account-create-update-t76tx"]
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.381093 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgb4n\" (UniqueName: \"kubernetes.io/projected/135ce389-c973-4614-935e-7d88b1f4666c-kube-api-access-dgb4n\") pod \"cinder-7579-account-create-update-t76tx\" (UID: \"135ce389-c973-4614-935e-7d88b1f4666c\") " pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.381359 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/135ce389-c973-4614-935e-7d88b1f4666c-operator-scripts\") pod \"cinder-7579-account-create-update-t76tx\" (UID: \"135ce389-c973-4614-935e-7d88b1f4666c\") " pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.381725 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km8qd\" (UniqueName: \"kubernetes.io/projected/f3253024-0605-4e0e-a476-d7947b0880ba-kube-api-access-km8qd\") pod \"cinder-db-create-s5567\" (UID: \"f3253024-0605-4e0e-a476-d7947b0880ba\") " pod="watcher-kuttl-default/cinder-db-create-s5567"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.381772 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3253024-0605-4e0e-a476-d7947b0880ba-operator-scripts\") pod \"cinder-db-create-s5567\" (UID: \"f3253024-0605-4e0e-a476-d7947b0880ba\") " pod="watcher-kuttl-default/cinder-db-create-s5567"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.382387 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3253024-0605-4e0e-a476-d7947b0880ba-operator-scripts\") pod \"cinder-db-create-s5567\" (UID: \"f3253024-0605-4e0e-a476-d7947b0880ba\") " pod="watcher-kuttl-default/cinder-db-create-s5567"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.407019 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km8qd\" (UniqueName: \"kubernetes.io/projected/f3253024-0605-4e0e-a476-d7947b0880ba-kube-api-access-km8qd\") pod \"cinder-db-create-s5567\" (UID: \"f3253024-0605-4e0e-a476-d7947b0880ba\") " pod="watcher-kuttl-default/cinder-db-create-s5567"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.483024 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgb4n\" (UniqueName: \"kubernetes.io/projected/135ce389-c973-4614-935e-7d88b1f4666c-kube-api-access-dgb4n\") pod \"cinder-7579-account-create-update-t76tx\" (UID: \"135ce389-c973-4614-935e-7d88b1f4666c\") " pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.483071 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/135ce389-c973-4614-935e-7d88b1f4666c-operator-scripts\") pod \"cinder-7579-account-create-update-t76tx\" (UID: \"135ce389-c973-4614-935e-7d88b1f4666c\") " pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.483874 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/135ce389-c973-4614-935e-7d88b1f4666c-operator-scripts\") pod \"cinder-7579-account-create-update-t76tx\" (UID: \"135ce389-c973-4614-935e-7d88b1f4666c\") " pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.501137 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-s5567"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.507547 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgb4n\" (UniqueName: \"kubernetes.io/projected/135ce389-c973-4614-935e-7d88b1f4666c-kube-api-access-dgb4n\") pod \"cinder-7579-account-create-update-t76tx\" (UID: \"135ce389-c973-4614-935e-7d88b1f4666c\") " pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.597788 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.776947 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.894888 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-ceilometer-tls-certs\") pod \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") "
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.895003 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-log-httpd\") pod \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") "
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.895023 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-scripts\") pod \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") "
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.895047 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2wk9\" (UniqueName: \"kubernetes.io/projected/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-kube-api-access-r2wk9\") pod \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") "
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.895072 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-combined-ca-bundle\") pod \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") "
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.895095 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-sg-core-conf-yaml\") pod \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") "
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.895123 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-config-data\") pod \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") "
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.895159 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-run-httpd\") pod \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\" (UID: \"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982\") "
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.895568 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" (UID: "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.895754 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" (UID: "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.899415 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-scripts" (OuterVolumeSpecName: "scripts") pod "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" (UID: "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.900252 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-kube-api-access-r2wk9" (OuterVolumeSpecName: "kube-api-access-r2wk9") pod "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" (UID: "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982"). InnerVolumeSpecName "kube-api-access-r2wk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.918346 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" (UID: "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.953008 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" (UID: "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.974486 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" (UID: "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.997334 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.997366 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.997375 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.997385 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2wk9\" (UniqueName: \"kubernetes.io/projected/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-kube-api-access-r2wk9\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.997394 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.997402 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:38 crc kubenswrapper[4813]: I0217 09:08:38.997411 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.003441 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-s5567"]
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.026495 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-config-data" (OuterVolumeSpecName: "config-data") pod "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" (UID: "2d56bd8e-3dc9-434c-afb0-94dcbc5e9982"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.099279 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:08:39 crc kubenswrapper[4813]: W0217 09:08:39.153702 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod135ce389_c973_4614_935e_7d88b1f4666c.slice/crio-b5c9b27b4da2dd0e39139f0b986638ef3087efad79b9be751b7262e918e9b917 WatchSource:0}: Error finding container b5c9b27b4da2dd0e39139f0b986638ef3087efad79b9be751b7262e918e9b917: Status 404 returned error can't find the container with id b5c9b27b4da2dd0e39139f0b986638ef3087efad79b9be751b7262e918e9b917
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.153835 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-7579-account-create-update-t76tx"]
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.277932 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.310778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx" event={"ID":"135ce389-c973-4614-935e-7d88b1f4666c","Type":"ContainerStarted","Data":"b5c9b27b4da2dd0e39139f0b986638ef3087efad79b9be751b7262e918e9b917"}
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.313430 4813 generic.go:334] "Generic (PLEG): container finished" podID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerID="d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de" exitCode=0
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.313474 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982","Type":"ContainerDied","Data":"d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de"}
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.313493 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2d56bd8e-3dc9-434c-afb0-94dcbc5e9982","Type":"ContainerDied","Data":"70480e5320429323c1b388cf359f27d456a0a6538e7bfce514dde4ba6c2c571f"}
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.313508 4813 scope.go:117] "RemoveContainer" containerID="815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.313624 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.316669 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-s5567" event={"ID":"f3253024-0605-4e0e-a476-d7947b0880ba","Type":"ContainerStarted","Data":"4b99b785307454d1cd80c6a84f290f81c5e983866c0640308bccbe3c079c31f6"}
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.316691 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-s5567" event={"ID":"f3253024-0605-4e0e-a476-d7947b0880ba","Type":"ContainerStarted","Data":"e7b2e7ca30ee81bba9fb9ae59ce975daeb90a8160005dbf720851022cc4a14f7"}
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.337588 4813 scope.go:117] "RemoveContainer" containerID="1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.348722 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-db-create-s5567" podStartSLOduration=1.348697026 podStartE2EDuration="1.348697026s" podCreationTimestamp="2026-02-17 09:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:08:39.33207737 +0000 UTC m=+1666.992838593" watchObservedRunningTime="2026-02-17 09:08:39.348697026 +0000 UTC m=+1667.009458249"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.358266 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.364345 4813 scope.go:117] "RemoveContainer" containerID="d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.365433 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.380589 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:08:39 crc kubenswrapper[4813]: E0217 09:08:39.380924 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="ceilometer-central-agent"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.380938 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="ceilometer-central-agent"
Feb 17 09:08:39 crc kubenswrapper[4813]: E0217 09:08:39.380957 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="proxy-httpd"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.380964 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="proxy-httpd"
Feb 17 09:08:39 crc kubenswrapper[4813]: E0217 09:08:39.380979 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="ceilometer-notification-agent"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.380986 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="ceilometer-notification-agent"
Feb 17 09:08:39 crc kubenswrapper[4813]: E0217 09:08:39.380994 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="sg-core"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.381001 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="sg-core"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.381137 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="ceilometer-central-agent"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.381151 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="sg-core"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.381169 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="ceilometer-notification-agent"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.381177 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" containerName="proxy-httpd"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.382392 4813 scope.go:117] "RemoveContainer" containerID="6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.382572 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.385891 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.385980 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.385995 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.397897 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.406591 4813 scope.go:117] "RemoveContainer" containerID="815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064"
Feb 17 09:08:39 crc kubenswrapper[4813]: E0217 09:08:39.409017 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064\": container with ID starting with 815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064 not found: ID does not exist" containerID="815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.409046 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064"} err="failed to get container status \"815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064\": rpc error: code = NotFound desc = could not find container \"815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064\": container with ID starting with 815d7710380bf3500d881f4566f98532dcdf99c05557f9a2df60dca27de2b064 not found: ID does not exist"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.409064 4813 scope.go:117] "RemoveContainer" containerID="1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb"
Feb 17 09:08:39 crc kubenswrapper[4813]: E0217 09:08:39.409844 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb\": container with ID starting with 1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb not found: ID does not exist" containerID="1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.409869 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb"} err="failed to get container status \"1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb\": rpc error: code = NotFound desc = could not find container \"1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb\": container with ID starting with 1190990538d672daeebb1da2fc08bd19f8d8f3235af7d7ad0bf59baa52f41ebb not found: ID does not exist"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.409884 4813 scope.go:117] "RemoveContainer" containerID="d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de"
Feb 17 09:08:39 crc kubenswrapper[4813]: E0217 09:08:39.414525 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de\": container with ID starting with d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de not found: ID does not exist" containerID="d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.414561 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de"} err="failed to get container status \"d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de\": rpc error: code = NotFound desc = could not find container \"d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de\": container with ID starting with d52048c451c31a7c018b3d4c9ba7406182a99700e6f2f928476f1d4769c942de not found: ID does not exist"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.414585 4813 scope.go:117] "RemoveContainer" containerID="6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163"
Feb 17 09:08:39 crc kubenswrapper[4813]: E0217 09:08:39.417217 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163\": container with ID starting with 6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163 not found: ID does not exist" containerID="6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.417395 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163"} err="failed to get container status \"6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163\": rpc error: code = NotFound desc = could not find container \"6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163\": container with ID starting with 6c410d1e8de234c01f0bde9a5c8348aa250fb094899315c6beab58c77b440163 not found: ID does not exist"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.507023 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-scripts\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.507073 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.507102 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmw77\" (UniqueName: \"kubernetes.io/projected/72cc6938-e3fd-4b74-ab51-81f30f7d7576-kube-api-access-dmw77\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.507133 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.507158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72cc6938-e3fd-4b74-ab51-81f30f7d7576-log-httpd\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.507176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.507214 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-config-data\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.507234 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72cc6938-e3fd-4b74-ab51-81f30f7d7576-run-httpd\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.608206 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-scripts\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.609201 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.609230 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmw77\" (UniqueName: \"kubernetes.io/projected/72cc6938-e3fd-4b74-ab51-81f30f7d7576-kube-api-access-dmw77\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.609266 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.609290 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72cc6938-e3fd-4b74-ab51-81f30f7d7576-log-httpd\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.609324 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:08:39 crc kubenswrapper[4813]: I0217
09:08:39.609377 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-config-data\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.609398 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72cc6938-e3fd-4b74-ab51-81f30f7d7576-run-httpd\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.609708 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72cc6938-e3fd-4b74-ab51-81f30f7d7576-run-httpd\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.609983 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72cc6938-e3fd-4b74-ab51-81f30f7d7576-log-httpd\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.612859 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.613057 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.614092 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-config-data\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.614769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.614999 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-scripts\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.629238 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmw77\" (UniqueName: \"kubernetes.io/projected/72cc6938-e3fd-4b74-ab51-81f30f7d7576-kube-api-access-dmw77\") pod \"ceilometer-0\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:39 crc kubenswrapper[4813]: I0217 09:08:39.711116 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:40 crc kubenswrapper[4813]: I0217 09:08:40.231059 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:08:40 crc kubenswrapper[4813]: I0217 09:08:40.325495 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"72cc6938-e3fd-4b74-ab51-81f30f7d7576","Type":"ContainerStarted","Data":"ae7753c31831f1b6a0f952773a0c7b06a2cd8e37d6f459cebcab7102b3bcce83"} Feb 17 09:08:40 crc kubenswrapper[4813]: I0217 09:08:40.328383 4813 generic.go:334] "Generic (PLEG): container finished" podID="f3253024-0605-4e0e-a476-d7947b0880ba" containerID="4b99b785307454d1cd80c6a84f290f81c5e983866c0640308bccbe3c079c31f6" exitCode=0 Feb 17 09:08:40 crc kubenswrapper[4813]: I0217 09:08:40.328424 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-s5567" event={"ID":"f3253024-0605-4e0e-a476-d7947b0880ba","Type":"ContainerDied","Data":"4b99b785307454d1cd80c6a84f290f81c5e983866c0640308bccbe3c079c31f6"} Feb 17 09:08:40 crc kubenswrapper[4813]: I0217 09:08:40.330082 4813 generic.go:334] "Generic (PLEG): container finished" podID="135ce389-c973-4614-935e-7d88b1f4666c" containerID="f202b7b0291084e89d19f7f9d156ed7f94ba23677a17129108e8b7a45b647787" exitCode=0 Feb 17 09:08:40 crc kubenswrapper[4813]: I0217 09:08:40.330106 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx" event={"ID":"135ce389-c973-4614-935e-7d88b1f4666c","Type":"ContainerDied","Data":"f202b7b0291084e89d19f7f9d156ed7f94ba23677a17129108e8b7a45b647787"} Feb 17 09:08:40 crc kubenswrapper[4813]: I0217 09:08:40.514409 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 
09:08:41.111746 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:08:41 crc kubenswrapper[4813]: E0217 09:08:41.112726 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.128058 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d56bd8e-3dc9-434c-afb0-94dcbc5e9982" path="/var/lib/kubelet/pods/2d56bd8e-3dc9-434c-afb0-94dcbc5e9982/volumes" Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.345485 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"72cc6938-e3fd-4b74-ab51-81f30f7d7576","Type":"ContainerStarted","Data":"5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254"} Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.785363 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.798347 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx" Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.826846 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-s5567" Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.947361 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3253024-0605-4e0e-a476-d7947b0880ba-operator-scripts\") pod \"f3253024-0605-4e0e-a476-d7947b0880ba\" (UID: \"f3253024-0605-4e0e-a476-d7947b0880ba\") " Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.947509 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/135ce389-c973-4614-935e-7d88b1f4666c-operator-scripts\") pod \"135ce389-c973-4614-935e-7d88b1f4666c\" (UID: \"135ce389-c973-4614-935e-7d88b1f4666c\") " Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.947601 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgb4n\" (UniqueName: \"kubernetes.io/projected/135ce389-c973-4614-935e-7d88b1f4666c-kube-api-access-dgb4n\") pod \"135ce389-c973-4614-935e-7d88b1f4666c\" (UID: \"135ce389-c973-4614-935e-7d88b1f4666c\") " Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.947632 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km8qd\" (UniqueName: \"kubernetes.io/projected/f3253024-0605-4e0e-a476-d7947b0880ba-kube-api-access-km8qd\") pod \"f3253024-0605-4e0e-a476-d7947b0880ba\" (UID: \"f3253024-0605-4e0e-a476-d7947b0880ba\") " Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.948041 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3253024-0605-4e0e-a476-d7947b0880ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3253024-0605-4e0e-a476-d7947b0880ba" (UID: "f3253024-0605-4e0e-a476-d7947b0880ba"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.948399 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135ce389-c973-4614-935e-7d88b1f4666c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "135ce389-c973-4614-935e-7d88b1f4666c" (UID: "135ce389-c973-4614-935e-7d88b1f4666c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.952480 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135ce389-c973-4614-935e-7d88b1f4666c-kube-api-access-dgb4n" (OuterVolumeSpecName: "kube-api-access-dgb4n") pod "135ce389-c973-4614-935e-7d88b1f4666c" (UID: "135ce389-c973-4614-935e-7d88b1f4666c"). InnerVolumeSpecName "kube-api-access-dgb4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:08:41 crc kubenswrapper[4813]: I0217 09:08:41.952543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3253024-0605-4e0e-a476-d7947b0880ba-kube-api-access-km8qd" (OuterVolumeSpecName: "kube-api-access-km8qd") pod "f3253024-0605-4e0e-a476-d7947b0880ba" (UID: "f3253024-0605-4e0e-a476-d7947b0880ba"). InnerVolumeSpecName "kube-api-access-km8qd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:08:42 crc kubenswrapper[4813]: I0217 09:08:42.050331 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgb4n\" (UniqueName: \"kubernetes.io/projected/135ce389-c973-4614-935e-7d88b1f4666c-kube-api-access-dgb4n\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:42 crc kubenswrapper[4813]: I0217 09:08:42.050382 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km8qd\" (UniqueName: \"kubernetes.io/projected/f3253024-0605-4e0e-a476-d7947b0880ba-kube-api-access-km8qd\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:42 crc kubenswrapper[4813]: I0217 09:08:42.050405 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3253024-0605-4e0e-a476-d7947b0880ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:42 crc kubenswrapper[4813]: I0217 09:08:42.050423 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/135ce389-c973-4614-935e-7d88b1f4666c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:08:42 crc kubenswrapper[4813]: I0217 09:08:42.353792 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-s5567" event={"ID":"f3253024-0605-4e0e-a476-d7947b0880ba","Type":"ContainerDied","Data":"e7b2e7ca30ee81bba9fb9ae59ce975daeb90a8160005dbf720851022cc4a14f7"} Feb 17 09:08:42 crc kubenswrapper[4813]: I0217 09:08:42.354079 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b2e7ca30ee81bba9fb9ae59ce975daeb90a8160005dbf720851022cc4a14f7" Feb 17 09:08:42 crc kubenswrapper[4813]: I0217 09:08:42.354072 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-s5567" Feb 17 09:08:42 crc kubenswrapper[4813]: I0217 09:08:42.356984 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx" event={"ID":"135ce389-c973-4614-935e-7d88b1f4666c","Type":"ContainerDied","Data":"b5c9b27b4da2dd0e39139f0b986638ef3087efad79b9be751b7262e918e9b917"} Feb 17 09:08:42 crc kubenswrapper[4813]: I0217 09:08:42.357011 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5c9b27b4da2dd0e39139f0b986638ef3087efad79b9be751b7262e918e9b917" Feb 17 09:08:42 crc kubenswrapper[4813]: I0217 09:08:42.357067 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-7579-account-create-update-t76tx" Feb 17 09:08:42 crc kubenswrapper[4813]: I0217 09:08:42.365800 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"72cc6938-e3fd-4b74-ab51-81f30f7d7576","Type":"ContainerStarted","Data":"6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1"} Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.063498 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.376162 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"72cc6938-e3fd-4b74-ab51-81f30f7d7576","Type":"ContainerStarted","Data":"50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970"} Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.602395 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-sync-5w2sh"] Feb 17 09:08:43 crc kubenswrapper[4813]: E0217 09:08:43.602710 4813 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f3253024-0605-4e0e-a476-d7947b0880ba" containerName="mariadb-database-create" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.602725 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3253024-0605-4e0e-a476-d7947b0880ba" containerName="mariadb-database-create" Feb 17 09:08:43 crc kubenswrapper[4813]: E0217 09:08:43.602740 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135ce389-c973-4614-935e-7d88b1f4666c" containerName="mariadb-account-create-update" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.602746 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="135ce389-c973-4614-935e-7d88b1f4666c" containerName="mariadb-account-create-update" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.602902 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="135ce389-c973-4614-935e-7d88b1f4666c" containerName="mariadb-account-create-update" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.602919 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3253024-0605-4e0e-a476-d7947b0880ba" containerName="mariadb-database-create" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.603506 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.605856 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-4fw6v" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.606056 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.607403 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.612169 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-5w2sh"] Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.674495 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-db-sync-config-data\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.674563 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-combined-ca-bundle\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.674607 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-scripts\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 
crc kubenswrapper[4813]: I0217 09:08:43.674629 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80820ffe-8422-4c04-b626-92f1d9f8e982-etc-machine-id\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.674687 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-config-data\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.674733 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rdkb\" (UniqueName: \"kubernetes.io/projected/80820ffe-8422-4c04-b626-92f1d9f8e982-kube-api-access-6rdkb\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.775750 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-db-sync-config-data\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.775815 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-combined-ca-bundle\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 
crc kubenswrapper[4813]: I0217 09:08:43.775861 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-scripts\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.775889 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80820ffe-8422-4c04-b626-92f1d9f8e982-etc-machine-id\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.775970 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-config-data\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.776025 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rdkb\" (UniqueName: \"kubernetes.io/projected/80820ffe-8422-4c04-b626-92f1d9f8e982-kube-api-access-6rdkb\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.776516 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80820ffe-8422-4c04-b626-92f1d9f8e982-etc-machine-id\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.779178 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-scripts\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.780399 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-db-sync-config-data\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.780806 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-combined-ca-bundle\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.781156 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-config-data\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.794202 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rdkb\" (UniqueName: \"kubernetes.io/projected/80820ffe-8422-4c04-b626-92f1d9f8e982-kube-api-access-6rdkb\") pod \"cinder-db-sync-5w2sh\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:43 crc kubenswrapper[4813]: I0217 09:08:43.917546 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:08:44 crc kubenswrapper[4813]: I0217 09:08:44.220703 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:44 crc kubenswrapper[4813]: I0217 09:08:44.385623 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"72cc6938-e3fd-4b74-ab51-81f30f7d7576","Type":"ContainerStarted","Data":"e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449"} Feb 17 09:08:44 crc kubenswrapper[4813]: I0217 09:08:44.385781 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:08:44 crc kubenswrapper[4813]: W0217 09:08:44.409515 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80820ffe_8422_4c04_b626_92f1d9f8e982.slice/crio-b69df24781b98d1959711f3b19aca22c71ea2a8de4d0d7eaecb4e11761b9bc35 WatchSource:0}: Error finding container b69df24781b98d1959711f3b19aca22c71ea2a8de4d0d7eaecb4e11761b9bc35: Status 404 returned error can't find the container with id b69df24781b98d1959711f3b19aca22c71ea2a8de4d0d7eaecb4e11761b9bc35 Feb 17 09:08:44 crc kubenswrapper[4813]: I0217 09:08:44.412628 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-5w2sh"] Feb 17 09:08:44 crc kubenswrapper[4813]: I0217 09:08:44.433491 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.043939929 podStartE2EDuration="5.433471718s" podCreationTimestamp="2026-02-17 09:08:39 +0000 UTC" firstStartedPulling="2026-02-17 09:08:40.221675813 +0000 UTC m=+1667.882437036" lastFinishedPulling="2026-02-17 09:08:43.611207602 +0000 UTC m=+1671.271968825" 
observedRunningTime="2026-02-17 09:08:44.424405158 +0000 UTC m=+1672.085166381" watchObservedRunningTime="2026-02-17 09:08:44.433471718 +0000 UTC m=+1672.094232941" Feb 17 09:08:45 crc kubenswrapper[4813]: I0217 09:08:45.401481 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-5w2sh" event={"ID":"80820ffe-8422-4c04-b626-92f1d9f8e982","Type":"ContainerStarted","Data":"b69df24781b98d1959711f3b19aca22c71ea2a8de4d0d7eaecb4e11761b9bc35"} Feb 17 09:08:45 crc kubenswrapper[4813]: I0217 09:08:45.465777 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:46 crc kubenswrapper[4813]: I0217 09:08:46.088539 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-create-std5j"] Feb 17 09:08:46 crc kubenswrapper[4813]: I0217 09:08:46.111459 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-d726-account-create-update-8bbvg"] Feb 17 09:08:46 crc kubenswrapper[4813]: I0217 09:08:46.125636 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/root-account-create-update-djd62"] Feb 17 09:08:46 crc kubenswrapper[4813]: I0217 09:08:46.132542 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-d726-account-create-update-8bbvg"] Feb 17 09:08:46 crc kubenswrapper[4813]: I0217 09:08:46.138515 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-create-std5j"] Feb 17 09:08:46 crc kubenswrapper[4813]: I0217 09:08:46.144513 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/root-account-create-update-djd62"] Feb 17 09:08:46 crc kubenswrapper[4813]: I0217 09:08:46.711422 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:47 crc kubenswrapper[4813]: I0217 09:08:47.121015 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5007a7b4-94ab-4c00-ba51-bf73132dfbfa" path="/var/lib/kubelet/pods/5007a7b4-94ab-4c00-ba51-bf73132dfbfa/volumes" Feb 17 09:08:47 crc kubenswrapper[4813]: I0217 09:08:47.121570 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57131d8c-9d5e-492b-9141-43913e805dd1" path="/var/lib/kubelet/pods/57131d8c-9d5e-492b-9141-43913e805dd1/volumes" Feb 17 09:08:47 crc kubenswrapper[4813]: I0217 09:08:47.122067 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6" path="/var/lib/kubelet/pods/dfdebfb5-6d89-427e-8f3f-2d41f48fa1c6/volumes" Feb 17 09:08:47 crc kubenswrapper[4813]: I0217 09:08:47.926354 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:49 crc kubenswrapper[4813]: I0217 09:08:49.106737 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:50 crc kubenswrapper[4813]: I0217 09:08:50.332989 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:51 crc kubenswrapper[4813]: I0217 09:08:51.518707 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:53 crc kubenswrapper[4813]: I0217 09:08:53.005243 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:53 crc kubenswrapper[4813]: I0217 09:08:53.121487 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:08:53 crc kubenswrapper[4813]: E0217 09:08:53.121752 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:08:54 crc kubenswrapper[4813]: I0217 09:08:54.245417 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:55 crc kubenswrapper[4813]: I0217 09:08:55.297956 4813 scope.go:117] "RemoveContainer" containerID="f70ad92c893a59fed058461e103f14572a962ec5580c8cfe9df5a95697ecf0e7" Feb 17 09:08:55 crc kubenswrapper[4813]: I0217 09:08:55.506650 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:56 crc kubenswrapper[4813]: I0217 09:08:56.679650 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:08:57 crc kubenswrapper[4813]: I0217 09:08:57.855326 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 
09:08:59 crc kubenswrapper[4813]: I0217 09:08:59.055178 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:00 crc kubenswrapper[4813]: I0217 09:09:00.244769 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:00 crc kubenswrapper[4813]: I0217 09:09:00.590032 4813 scope.go:117] "RemoveContainer" containerID="3504c6795c1b6df62b83f757073879b3443b391862b0b9332bd99d8b38bb261c" Feb 17 09:09:00 crc kubenswrapper[4813]: E0217 09:09:00.625319 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 17 09:09:00 crc kubenswrapper[4813]: E0217 09:09:00.625908 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rdkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5w2sh_watcher-kuttl-default(80820ffe-8422-4c04-b626-92f1d9f8e982): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 09:09:00 crc kubenswrapper[4813]: E0217 09:09:00.627409 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/cinder-db-sync-5w2sh" podUID="80820ffe-8422-4c04-b626-92f1d9f8e982" Feb 17 09:09:00 crc kubenswrapper[4813]: I0217 09:09:00.636058 4813 scope.go:117] "RemoveContainer" containerID="3b35b8326d5924650137295748e1edcdd5cf96d0becaa2462775875bea40c42f" Feb 17 09:09:00 crc kubenswrapper[4813]: I0217 09:09:00.676943 4813 scope.go:117] "RemoveContainer" containerID="44fb9bc493af8b1137a41097d3f6fcc2dae062733ea721bcd96c23e5ad5675b2" Feb 17 09:09:00 crc kubenswrapper[4813]: I0217 09:09:00.719292 4813 scope.go:117] "RemoveContainer" containerID="06491395c0afe5849882a1901e32892d7c9b748fbef264396b45f0193bdbeb20" Feb 17 09:09:00 crc kubenswrapper[4813]: I0217 09:09:00.781397 4813 scope.go:117] "RemoveContainer" containerID="620961d7dfab08a80429352b9d1bc32c920166f4ee6488e2498ee81062095dcc" Feb 17 09:09:00 crc kubenswrapper[4813]: I0217 09:09:00.806263 4813 scope.go:117] "RemoveContainer" containerID="4d8c294646491d315b2686d7b0a80b424d3a771a7e95971da346bb1287c9132c" Feb 17 09:09:00 crc kubenswrapper[4813]: I0217 09:09:00.826690 4813 scope.go:117] "RemoveContainer" containerID="fd106661bd624754f10345ddc8c6c2374aa13b0d887a76b9159c9c361ff5097a" Feb 17 09:09:00 crc kubenswrapper[4813]: I0217 09:09:00.846469 4813 scope.go:117] "RemoveContainer" 
containerID="95689a9723cdb14ac451e4792ed0a05199f19d2734acf96aa3ece8edb9b27d60" Feb 17 09:09:00 crc kubenswrapper[4813]: I0217 09:09:00.869486 4813 scope.go:117] "RemoveContainer" containerID="66842e66f6758f0d04f1033c619d0108c4eb52ea3900d5881b0cddfa6f81d694" Feb 17 09:09:00 crc kubenswrapper[4813]: I0217 09:09:00.893036 4813 scope.go:117] "RemoveContainer" containerID="330f22faf9a8b981cb08ada4828a3f84bc38b8ff49dbbd7c8779600d60ba0079" Feb 17 09:09:01 crc kubenswrapper[4813]: I0217 09:09:01.508018 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:01 crc kubenswrapper[4813]: E0217 09:09:01.550603 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="watcher-kuttl-default/cinder-db-sync-5w2sh" podUID="80820ffe-8422-4c04-b626-92f1d9f8e982" Feb 17 09:09:02 crc kubenswrapper[4813]: I0217 09:09:02.671850 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:03 crc kubenswrapper[4813]: I0217 09:09:03.879683 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:05 crc kubenswrapper[4813]: I0217 09:09:05.067659 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:06 crc kubenswrapper[4813]: I0217 09:09:06.280433 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:07 crc kubenswrapper[4813]: I0217 09:09:07.517427 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:08 crc kubenswrapper[4813]: I0217 09:09:08.111112 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:09:08 crc kubenswrapper[4813]: E0217 09:09:08.111641 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:09:08 crc kubenswrapper[4813]: I0217 09:09:08.712158 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:09 crc kubenswrapper[4813]: I0217 09:09:09.728891 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:09 crc kubenswrapper[4813]: I0217 09:09:09.983288 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:11 crc kubenswrapper[4813]: I0217 09:09:11.218171 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 
09:09:12 crc kubenswrapper[4813]: I0217 09:09:12.477832 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:13 crc kubenswrapper[4813]: I0217 09:09:13.678565 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:14 crc kubenswrapper[4813]: I0217 09:09:14.873244 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:15 crc kubenswrapper[4813]: I0217 09:09:15.672461 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-5w2sh" event={"ID":"80820ffe-8422-4c04-b626-92f1d9f8e982","Type":"ContainerStarted","Data":"3285e6e0aff4a07554ffa6b76f4593014c0168a914dc06a48ccaa32dcb5317d8"} Feb 17 09:09:15 crc kubenswrapper[4813]: I0217 09:09:15.694033 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-db-sync-5w2sh" podStartSLOduration=2.477390499 podStartE2EDuration="32.69400956s" podCreationTimestamp="2026-02-17 09:08:43 +0000 UTC" firstStartedPulling="2026-02-17 09:08:44.411415456 +0000 UTC m=+1672.072176669" lastFinishedPulling="2026-02-17 09:09:14.628034497 +0000 UTC m=+1702.288795730" observedRunningTime="2026-02-17 09:09:15.685470147 +0000 UTC m=+1703.346231380" watchObservedRunningTime="2026-02-17 09:09:15.69400956 +0000 UTC m=+1703.354770783" Feb 17 09:09:16 crc kubenswrapper[4813]: I0217 09:09:16.078021 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:17 crc kubenswrapper[4813]: I0217 09:09:17.305052 
4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:18 crc kubenswrapper[4813]: I0217 09:09:18.494738 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:19 crc kubenswrapper[4813]: I0217 09:09:19.111278 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:09:19 crc kubenswrapper[4813]: E0217 09:09:19.111730 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:09:19 crc kubenswrapper[4813]: I0217 09:09:19.666225 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:20 crc kubenswrapper[4813]: I0217 09:09:20.716422 4813 generic.go:334] "Generic (PLEG): container finished" podID="80820ffe-8422-4c04-b626-92f1d9f8e982" containerID="3285e6e0aff4a07554ffa6b76f4593014c0168a914dc06a48ccaa32dcb5317d8" exitCode=0 Feb 17 09:09:20 crc kubenswrapper[4813]: I0217 09:09:20.716517 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-5w2sh" event={"ID":"80820ffe-8422-4c04-b626-92f1d9f8e982","Type":"ContainerDied","Data":"3285e6e0aff4a07554ffa6b76f4593014c0168a914dc06a48ccaa32dcb5317d8"} Feb 17 09:09:20 crc kubenswrapper[4813]: I0217 09:09:20.859416 4813 
log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.047597 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.080873 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.201096 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-scripts\") pod \"80820ffe-8422-4c04-b626-92f1d9f8e982\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.201398 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80820ffe-8422-4c04-b626-92f1d9f8e982-etc-machine-id\") pod \"80820ffe-8422-4c04-b626-92f1d9f8e982\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.201476 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-config-data\") pod \"80820ffe-8422-4c04-b626-92f1d9f8e982\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.201518 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-db-sync-config-data\") pod \"80820ffe-8422-4c04-b626-92f1d9f8e982\" (UID: 
\"80820ffe-8422-4c04-b626-92f1d9f8e982\") " Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.201562 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-combined-ca-bundle\") pod \"80820ffe-8422-4c04-b626-92f1d9f8e982\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.201553 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80820ffe-8422-4c04-b626-92f1d9f8e982-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "80820ffe-8422-4c04-b626-92f1d9f8e982" (UID: "80820ffe-8422-4c04-b626-92f1d9f8e982"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.201584 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rdkb\" (UniqueName: \"kubernetes.io/projected/80820ffe-8422-4c04-b626-92f1d9f8e982-kube-api-access-6rdkb\") pod \"80820ffe-8422-4c04-b626-92f1d9f8e982\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.201933 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/80820ffe-8422-4c04-b626-92f1d9f8e982-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.207068 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-scripts" (OuterVolumeSpecName: "scripts") pod "80820ffe-8422-4c04-b626-92f1d9f8e982" (UID: "80820ffe-8422-4c04-b626-92f1d9f8e982"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.207328 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80820ffe-8422-4c04-b626-92f1d9f8e982-kube-api-access-6rdkb" (OuterVolumeSpecName: "kube-api-access-6rdkb") pod "80820ffe-8422-4c04-b626-92f1d9f8e982" (UID: "80820ffe-8422-4c04-b626-92f1d9f8e982"). InnerVolumeSpecName "kube-api-access-6rdkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.209465 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "80820ffe-8422-4c04-b626-92f1d9f8e982" (UID: "80820ffe-8422-4c04-b626-92f1d9f8e982"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:22 crc kubenswrapper[4813]: E0217 09:09:22.245616 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-combined-ca-bundle podName:80820ffe-8422-4c04-b626-92f1d9f8e982 nodeName:}" failed. No retries permitted until 2026-02-17 09:09:22.745584117 +0000 UTC m=+1710.406345360 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-combined-ca-bundle") pod "80820ffe-8422-4c04-b626-92f1d9f8e982" (UID: "80820ffe-8422-4c04-b626-92f1d9f8e982") : error deleting /var/lib/kubelet/pods/80820ffe-8422-4c04-b626-92f1d9f8e982/volume-subpaths: remove /var/lib/kubelet/pods/80820ffe-8422-4c04-b626-92f1d9f8e982/volume-subpaths: no such file or directory Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.247981 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-config-data" (OuterVolumeSpecName: "config-data") pod "80820ffe-8422-4c04-b626-92f1d9f8e982" (UID: "80820ffe-8422-4c04-b626-92f1d9f8e982"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.303616 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rdkb\" (UniqueName: \"kubernetes.io/projected/80820ffe-8422-4c04-b626-92f1d9f8e982-kube-api-access-6rdkb\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.303904 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.304020 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.304114 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 
09:09:22.734650 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-5w2sh" event={"ID":"80820ffe-8422-4c04-b626-92f1d9f8e982","Type":"ContainerDied","Data":"b69df24781b98d1959711f3b19aca22c71ea2a8de4d0d7eaecb4e11761b9bc35"} Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.734686 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69df24781b98d1959711f3b19aca22c71ea2a8de4d0d7eaecb4e11761b9bc35" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.734733 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-5w2sh" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.811033 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-combined-ca-bundle\") pod \"80820ffe-8422-4c04-b626-92f1d9f8e982\" (UID: \"80820ffe-8422-4c04-b626-92f1d9f8e982\") " Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.816480 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80820ffe-8422-4c04-b626-92f1d9f8e982" (UID: "80820ffe-8422-4c04-b626-92f1d9f8e982"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:22 crc kubenswrapper[4813]: I0217 09:09:22.913570 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80820ffe-8422-4c04-b626-92f1d9f8e982-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.075667 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 17 09:09:23 crc kubenswrapper[4813]: E0217 09:09:23.076077 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80820ffe-8422-4c04-b626-92f1d9f8e982" containerName="cinder-db-sync" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.076092 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="80820ffe-8422-4c04-b626-92f1d9f8e982" containerName="cinder-db-sync" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.076300 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="80820ffe-8422-4c04-b626-92f1d9f8e982" containerName="cinder-db-sync" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.077179 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.079898 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.080188 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.080403 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.080630 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-4fw6v" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.093552 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.136870 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.138064 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.143377 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.144196 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.182363 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.183857 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.185842 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-api-config-data" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.197382 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220372 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5770a2b6-7618-4e7c-b57b-92ed0540fea7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220425 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220470 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-nvme\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220499 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-dev\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 
09:09:23.220522 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-sys\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220552 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l2gs\" (UniqueName: \"kubernetes.io/projected/5770a2b6-7618-4e7c-b57b-92ed0540fea7-kube-api-access-6l2gs\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220574 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-scripts\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220614 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220689 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgtzw\" (UniqueName: \"kubernetes.io/projected/24bfaf69-146c-495b-bfe4-2c14543c64ba-kube-api-access-kgtzw\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220750 
4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220795 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-config-data-custom\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220821 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-scripts\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220843 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220915 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.220995 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.221026 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.221079 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-run\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.221107 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-config-data\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.221158 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.221191 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-config-data\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.221232 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.221254 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-lib-modules\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.274389 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.322905 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-config-data-custom\") 
pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.322945 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678e07d8-2ac8-4504-8834-685bc3a4ecfd-logs\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.322967 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-lib-modules\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.322986 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5770a2b6-7618-4e7c-b57b-92ed0540fea7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323000 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323013 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-nvme\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc 
kubenswrapper[4813]: I0217 09:09:23.323035 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-dev\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323053 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-sys\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323073 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l2gs\" (UniqueName: \"kubernetes.io/projected/5770a2b6-7618-4e7c-b57b-92ed0540fea7-kube-api-access-6l2gs\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323087 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-scripts\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323107 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323143 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-scripts\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323159 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/678e07d8-2ac8-4504-8834-685bc3a4ecfd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323176 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgtzw\" (UniqueName: \"kubernetes.io/projected/24bfaf69-146c-495b-bfe4-2c14543c64ba-kube-api-access-kgtzw\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323195 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323212 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323230 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-config-data-custom\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323245 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-scripts\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323259 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323273 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323298 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-iscsi\") pod \"cinder-backup-0\" (UID: 
\"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323339 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323352 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323375 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-run\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323395 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hxv4\" (UniqueName: \"kubernetes.io/projected/678e07d8-2ac8-4504-8834-685bc3a4ecfd-kube-api-access-9hxv4\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323412 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-config-data\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc 
kubenswrapper[4813]: I0217 09:09:23.323435 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-config-data-custom\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323456 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323475 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-config-data\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.323492 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-config-data\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.327123 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.327202 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-run\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.327270 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.330241 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-config-data\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.330390 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.330441 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-sys\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.330497 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-lib-modules\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " 
pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.330521 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5770a2b6-7618-4e7c-b57b-92ed0540fea7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.330541 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.330596 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-nvme\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.330619 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-dev\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.332122 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.332831 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-scripts\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.333027 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.335545 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.336077 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.336222 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-config-data-custom\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.337086 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.337149 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-config-data\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.339089 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.339905 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.349793 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgtzw\" (UniqueName: \"kubernetes.io/projected/24bfaf69-146c-495b-bfe4-2c14543c64ba-kube-api-access-kgtzw\") pod \"cinder-backup-0\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.355936 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l2gs\" (UniqueName: \"kubernetes.io/projected/5770a2b6-7618-4e7c-b57b-92ed0540fea7-kube-api-access-6l2gs\") pod \"cinder-scheduler-0\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 
crc kubenswrapper[4813]: I0217 09:09:23.393022 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.425485 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hxv4\" (UniqueName: \"kubernetes.io/projected/678e07d8-2ac8-4504-8834-685bc3a4ecfd-kube-api-access-9hxv4\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.425737 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-config-data-custom\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.425838 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-config-data\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.425934 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678e07d8-2ac8-4504-8834-685bc3a4ecfd-logs\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.426055 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " 
pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.426143 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-scripts\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.426232 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/678e07d8-2ac8-4504-8834-685bc3a4ecfd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.426329 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.426603 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678e07d8-2ac8-4504-8834-685bc3a4ecfd-logs\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.428751 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-config-data-custom\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.429448 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/678e07d8-2ac8-4504-8834-685bc3a4ecfd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.432046 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.432660 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.433671 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-config-data\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.434111 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-scripts\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.449217 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hxv4\" (UniqueName: \"kubernetes.io/projected/678e07d8-2ac8-4504-8834-685bc3a4ecfd-kube-api-access-9hxv4\") pod \"cinder-api-0\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") " 
pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.464597 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.508184 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:23 crc kubenswrapper[4813]: I0217 09:09:23.994109 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 17 09:09:24 crc kubenswrapper[4813]: I0217 09:09:24.084579 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 17 09:09:24 crc kubenswrapper[4813]: I0217 09:09:24.099201 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 17 09:09:24 crc kubenswrapper[4813]: I0217 09:09:24.477495 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:24 crc kubenswrapper[4813]: I0217 09:09:24.755572 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"678e07d8-2ac8-4504-8834-685bc3a4ecfd","Type":"ContainerStarted","Data":"603c06b6b98e4acd098dd5dd2063edd4d2ba069856afdd0ac2826bd1163daf17"} Feb 17 09:09:24 crc kubenswrapper[4813]: I0217 09:09:24.755948 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"678e07d8-2ac8-4504-8834-685bc3a4ecfd","Type":"ContainerStarted","Data":"260e001ddea3004d5b6d26f60714b2e9ec2162300717ce58e469d289d4eb6b17"} Feb 17 09:09:24 crc kubenswrapper[4813]: I0217 09:09:24.756708 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" 
event={"ID":"5770a2b6-7618-4e7c-b57b-92ed0540fea7","Type":"ContainerStarted","Data":"de6a1a43db981b89da12282e23e08aeaa953290a37336c8b6682c339b487fc82"} Feb 17 09:09:24 crc kubenswrapper[4813]: I0217 09:09:24.757645 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"24bfaf69-146c-495b-bfe4-2c14543c64ba","Type":"ContainerStarted","Data":"e34e71a9dd9e817c55a82664b963ee7ba1afb199a936002205fefb1ff10807e3"} Feb 17 09:09:24 crc kubenswrapper[4813]: I0217 09:09:24.779092 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Feb 17 09:09:25 crc kubenswrapper[4813]: I0217 09:09:25.094027 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-vpgcw"] Feb 17 09:09:25 crc kubenswrapper[4813]: I0217 09:09:25.109504 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-vpgcw"] Feb 17 09:09:25 crc kubenswrapper[4813]: I0217 09:09:25.121801 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a0f4474-1437-4124-8bbd-132f173fda58" path="/var/lib/kubelet/pods/4a0f4474-1437-4124-8bbd-132f173fda58/volumes" Feb 17 09:09:25 crc kubenswrapper[4813]: I0217 09:09:25.694752 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:25 crc kubenswrapper[4813]: I0217 09:09:25.775326 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"5770a2b6-7618-4e7c-b57b-92ed0540fea7","Type":"ContainerStarted","Data":"e30970253b955ebe3d11d66b9b9e2fcd37d7ef0c04fd9985c926c338de6bc654"} Feb 17 09:09:25 crc kubenswrapper[4813]: I0217 09:09:25.777063 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" 
event={"ID":"678e07d8-2ac8-4504-8834-685bc3a4ecfd","Type":"ContainerStarted","Data":"3c496862fe6cfa62c49ec272dfac0ce13ec3203fba44633d9d140d97167cea19"} Feb 17 09:09:25 crc kubenswrapper[4813]: I0217 09:09:25.777188 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="678e07d8-2ac8-4504-8834-685bc3a4ecfd" containerName="cinder-api-log" containerID="cri-o://603c06b6b98e4acd098dd5dd2063edd4d2ba069856afdd0ac2826bd1163daf17" gracePeriod=30 Feb 17 09:09:25 crc kubenswrapper[4813]: I0217 09:09:25.777513 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:25 crc kubenswrapper[4813]: I0217 09:09:25.777734 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="678e07d8-2ac8-4504-8834-685bc3a4ecfd" containerName="cinder-api" containerID="cri-o://3c496862fe6cfa62c49ec272dfac0ce13ec3203fba44633d9d140d97167cea19" gracePeriod=30 Feb 17 09:09:25 crc kubenswrapper[4813]: I0217 09:09:25.801891 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-api-0" podStartSLOduration=2.801875947 podStartE2EDuration="2.801875947s" podCreationTimestamp="2026-02-17 09:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:09:25.798892802 +0000 UTC m=+1713.459654025" watchObservedRunningTime="2026-02-17 09:09:25.801875947 +0000 UTC m=+1713.462637170" Feb 17 09:09:26 crc kubenswrapper[4813]: I0217 09:09:26.787972 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"24bfaf69-146c-495b-bfe4-2c14543c64ba","Type":"ContainerStarted","Data":"f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6"} Feb 17 09:09:26 crc kubenswrapper[4813]: I0217 09:09:26.788471 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"24bfaf69-146c-495b-bfe4-2c14543c64ba","Type":"ContainerStarted","Data":"b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca"} Feb 17 09:09:26 crc kubenswrapper[4813]: I0217 09:09:26.796040 4813 generic.go:334] "Generic (PLEG): container finished" podID="678e07d8-2ac8-4504-8834-685bc3a4ecfd" containerID="603c06b6b98e4acd098dd5dd2063edd4d2ba069856afdd0ac2826bd1163daf17" exitCode=143 Feb 17 09:09:26 crc kubenswrapper[4813]: I0217 09:09:26.796105 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"678e07d8-2ac8-4504-8834-685bc3a4ecfd","Type":"ContainerDied","Data":"603c06b6b98e4acd098dd5dd2063edd4d2ba069856afdd0ac2826bd1163daf17"} Feb 17 09:09:26 crc kubenswrapper[4813]: I0217 09:09:26.798334 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"5770a2b6-7618-4e7c-b57b-92ed0540fea7","Type":"ContainerStarted","Data":"af0a0c08b4b3368bbbf3ece7c942ef142d3c40ddd2b281bd1e7eb1418da1cd80"} Feb 17 09:09:26 crc kubenswrapper[4813]: I0217 09:09:26.817927 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=2.4044027740000002 podStartE2EDuration="3.81791146s" podCreationTimestamp="2026-02-17 09:09:23 +0000 UTC" firstStartedPulling="2026-02-17 09:09:24.119976229 +0000 UTC m=+1711.780737442" lastFinishedPulling="2026-02-17 09:09:25.533484905 +0000 UTC m=+1713.194246128" observedRunningTime="2026-02-17 09:09:26.815702748 +0000 UTC m=+1714.476463971" watchObservedRunningTime="2026-02-17 09:09:26.81791146 +0000 UTC m=+1714.478672683" Feb 17 09:09:26 crc kubenswrapper[4813]: I0217 09:09:26.841589 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=3.145330413 
podStartE2EDuration="3.841575083s" podCreationTimestamp="2026-02-17 09:09:23 +0000 UTC" firstStartedPulling="2026-02-17 09:09:24.013880112 +0000 UTC m=+1711.674641335" lastFinishedPulling="2026-02-17 09:09:24.710124772 +0000 UTC m=+1712.370886005" observedRunningTime="2026-02-17 09:09:26.835976754 +0000 UTC m=+1714.496737977" watchObservedRunningTime="2026-02-17 09:09:26.841575083 +0000 UTC m=+1714.502336306" Feb 17 09:09:26 crc kubenswrapper[4813]: I0217 09:09:26.881840 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:28 crc kubenswrapper[4813]: I0217 09:09:28.081109 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:28 crc kubenswrapper[4813]: I0217 09:09:28.393604 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:28 crc kubenswrapper[4813]: I0217 09:09:28.466633 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:29 crc kubenswrapper[4813]: I0217 09:09:29.331041 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:30 crc kubenswrapper[4813]: I0217 09:09:30.579102 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:31 crc kubenswrapper[4813]: I0217 09:09:31.773702 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:32 crc kubenswrapper[4813]: I0217 09:09:32.112348 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:09:32 crc kubenswrapper[4813]: E0217 09:09:32.112825 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:09:32 crc kubenswrapper[4813]: I0217 09:09:32.999777 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:33 crc kubenswrapper[4813]: I0217 09:09:33.612973 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:33 crc kubenswrapper[4813]: I0217 09:09:33.686745 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 17 09:09:33 crc kubenswrapper[4813]: I0217 09:09:33.704377 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:33 crc kubenswrapper[4813]: I0217 09:09:33.763892 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 17 09:09:33 crc kubenswrapper[4813]: I0217 09:09:33.860113 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="5770a2b6-7618-4e7c-b57b-92ed0540fea7" 
containerName="cinder-scheduler" containerID="cri-o://e30970253b955ebe3d11d66b9b9e2fcd37d7ef0c04fd9985c926c338de6bc654" gracePeriod=30 Feb 17 09:09:33 crc kubenswrapper[4813]: I0217 09:09:33.860177 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="5770a2b6-7618-4e7c-b57b-92ed0540fea7" containerName="probe" containerID="cri-o://af0a0c08b4b3368bbbf3ece7c942ef142d3c40ddd2b281bd1e7eb1418da1cd80" gracePeriod=30 Feb 17 09:09:33 crc kubenswrapper[4813]: I0217 09:09:33.860292 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="24bfaf69-146c-495b-bfe4-2c14543c64ba" containerName="cinder-backup" containerID="cri-o://b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca" gracePeriod=30 Feb 17 09:09:33 crc kubenswrapper[4813]: I0217 09:09:33.860357 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="24bfaf69-146c-495b-bfe4-2c14543c64ba" containerName="probe" containerID="cri-o://f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6" gracePeriod=30 Feb 17 09:09:34 crc kubenswrapper[4813]: I0217 09:09:34.155184 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:34 crc kubenswrapper[4813]: I0217 09:09:34.875195 4813 generic.go:334] "Generic (PLEG): container finished" podID="5770a2b6-7618-4e7c-b57b-92ed0540fea7" containerID="af0a0c08b4b3368bbbf3ece7c942ef142d3c40ddd2b281bd1e7eb1418da1cd80" exitCode=0 Feb 17 09:09:34 crc kubenswrapper[4813]: I0217 09:09:34.875235 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" 
event={"ID":"5770a2b6-7618-4e7c-b57b-92ed0540fea7","Type":"ContainerDied","Data":"af0a0c08b4b3368bbbf3ece7c942ef142d3c40ddd2b281bd1e7eb1418da1cd80"} Feb 17 09:09:35 crc kubenswrapper[4813]: I0217 09:09:35.278586 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:09:35 crc kubenswrapper[4813]: I0217 09:09:35.279083 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="52f6f3c3-f62e-4cb7-9c9d-2b518d688eca" containerName="watcher-decision-engine" containerID="cri-o://141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808" gracePeriod=30 Feb 17 09:09:35 crc kubenswrapper[4813]: I0217 09:09:35.331651 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:35 crc kubenswrapper[4813]: I0217 09:09:35.508984 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/cinder-api-0" Feb 17 09:09:35 crc kubenswrapper[4813]: I0217 09:09:35.884759 4813 generic.go:334] "Generic (PLEG): container finished" podID="24bfaf69-146c-495b-bfe4-2c14543c64ba" containerID="f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6" exitCode=0 Feb 17 09:09:35 crc kubenswrapper[4813]: I0217 09:09:35.884806 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"24bfaf69-146c-495b-bfe4-2c14543c64ba","Type":"ContainerDied","Data":"f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6"} Feb 17 09:09:36 crc kubenswrapper[4813]: I0217 09:09:36.227270 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:09:36 crc kubenswrapper[4813]: I0217 09:09:36.227568 4813 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="watcher-kuttl-default/ceilometer-0" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="ceilometer-central-agent" containerID="cri-o://5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254" gracePeriod=30 Feb 17 09:09:36 crc kubenswrapper[4813]: I0217 09:09:36.227995 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="proxy-httpd" containerID="cri-o://e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449" gracePeriod=30 Feb 17 09:09:36 crc kubenswrapper[4813]: I0217 09:09:36.228063 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="sg-core" containerID="cri-o://50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970" gracePeriod=30 Feb 17 09:09:36 crc kubenswrapper[4813]: I0217 09:09:36.228106 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="ceilometer-notification-agent" containerID="cri-o://6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1" gracePeriod=30 Feb 17 09:09:36 crc kubenswrapper[4813]: I0217 09:09:36.556669 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:36 crc kubenswrapper[4813]: I0217 09:09:36.895639 4813 generic.go:334] "Generic (PLEG): container finished" podID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerID="e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449" exitCode=0 Feb 17 09:09:36 crc kubenswrapper[4813]: I0217 09:09:36.895895 4813 generic.go:334] "Generic (PLEG): container finished" podID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" 
containerID="50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970" exitCode=2 Feb 17 09:09:36 crc kubenswrapper[4813]: I0217 09:09:36.895907 4813 generic.go:334] "Generic (PLEG): container finished" podID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerID="5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254" exitCode=0 Feb 17 09:09:36 crc kubenswrapper[4813]: I0217 09:09:36.895721 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"72cc6938-e3fd-4b74-ab51-81f30f7d7576","Type":"ContainerDied","Data":"e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449"} Feb 17 09:09:36 crc kubenswrapper[4813]: I0217 09:09:36.895948 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"72cc6938-e3fd-4b74-ab51-81f30f7d7576","Type":"ContainerDied","Data":"50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970"} Feb 17 09:09:36 crc kubenswrapper[4813]: I0217 09:09:36.895965 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"72cc6938-e3fd-4b74-ab51-81f30f7d7576","Type":"ContainerDied","Data":"5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254"} Feb 17 09:09:37 crc kubenswrapper[4813]: I0217 09:09:37.757174 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.836689 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.907745 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924481 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-machine-id\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924522 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-config-data-custom\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924573 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-lib-cinder\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924570 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924604 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-lib-modules\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924622 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-run\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924631 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924639 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-dev\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924655 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-iscsi\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924672 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924687 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-locks-brick\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924697 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-run" (OuterVolumeSpecName: "run") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924707 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-cert-memcached-mtls\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924718 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924741 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-dev" (OuterVolumeSpecName: "dev") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924742 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-sys\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924766 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-sys" (OuterVolumeSpecName: "sys") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924790 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924798 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-locks-cinder\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924868 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-nvme\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924915 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgtzw\" (UniqueName: \"kubernetes.io/projected/24bfaf69-146c-495b-bfe4-2c14543c64ba-kube-api-access-kgtzw\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924945 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-combined-ca-bundle\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.924969 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-scripts\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.925015 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-config-data\") pod \"24bfaf69-146c-495b-bfe4-2c14543c64ba\" (UID: \"24bfaf69-146c-495b-bfe4-2c14543c64ba\") " Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.925396 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.925420 4813 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.925433 4813 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-lib-modules\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.925447 4813 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-run\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.925458 4813 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-dev\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.925468 4813 reconciler_common.go:293] 
"Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-iscsi\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.925478 4813 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-locks-brick\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.925489 4813 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-sys\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.925747 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.926135 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.927398 4813 generic.go:334] "Generic (PLEG): container finished" podID="5770a2b6-7618-4e7c-b57b-92ed0540fea7" containerID="e30970253b955ebe3d11d66b9b9e2fcd37d7ef0c04fd9985c926c338de6bc654" exitCode=0 Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.927485 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"5770a2b6-7618-4e7c-b57b-92ed0540fea7","Type":"ContainerDied","Data":"e30970253b955ebe3d11d66b9b9e2fcd37d7ef0c04fd9985c926c338de6bc654"} Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.927523 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"5770a2b6-7618-4e7c-b57b-92ed0540fea7","Type":"ContainerDied","Data":"de6a1a43db981b89da12282e23e08aeaa953290a37336c8b6682c339b487fc82"} Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.927534 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de6a1a43db981b89da12282e23e08aeaa953290a37336c8b6682c339b487fc82" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.929840 4813 generic.go:334] "Generic (PLEG): container finished" podID="24bfaf69-146c-495b-bfe4-2c14543c64ba" containerID="b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca" exitCode=0 Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.929887 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"24bfaf69-146c-495b-bfe4-2c14543c64ba","Type":"ContainerDied","Data":"b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca"} Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.929920 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" 
event={"ID":"24bfaf69-146c-495b-bfe4-2c14543c64ba","Type":"ContainerDied","Data":"e34e71a9dd9e817c55a82664b963ee7ba1afb199a936002205fefb1ff10807e3"} Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.929942 4813 scope.go:117] "RemoveContainer" containerID="f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.930090 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.931193 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-scripts" (OuterVolumeSpecName: "scripts") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.931289 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.933822 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24bfaf69-146c-495b-bfe4-2c14543c64ba-kube-api-access-kgtzw" (OuterVolumeSpecName: "kube-api-access-kgtzw") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "kube-api-access-kgtzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:09:38 crc kubenswrapper[4813]: I0217 09:09:38.999834 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.032386 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-config-data" (OuterVolumeSpecName: "config-data") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.032851 4813 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.032877 4813 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24bfaf69-146c-495b-bfe4-2c14543c64ba-etc-nvme\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.032887 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgtzw\" (UniqueName: \"kubernetes.io/projected/24bfaf69-146c-495b-bfe4-2c14543c64ba-kube-api-access-kgtzw\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.032897 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.032906 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.032915 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.032923 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.050708 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "24bfaf69-146c-495b-bfe4-2c14543c64ba" (UID: "24bfaf69-146c-495b-bfe4-2c14543c64ba"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.072709 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.084857 4813 scope.go:117] "RemoveContainer" containerID="b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.115317 4813 scope.go:117] "RemoveContainer" containerID="f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6" Feb 17 09:09:39 crc kubenswrapper[4813]: E0217 09:09:39.117752 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6\": container with ID starting with f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6 not found: ID does not exist" containerID="f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.117801 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6"} err="failed to get container status \"f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6\": rpc error: code = NotFound desc = could not find container \"f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6\": container with ID starting with f48da2ddd82e7b80e8c9c9f1833df34d098f50306dc5b626fbd9099f2400b1c6 not found: ID does not exist" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.117825 4813 scope.go:117] "RemoveContainer" containerID="b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca" Feb 17 09:09:39 crc kubenswrapper[4813]: E0217 09:09:39.118154 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca\": container with ID starting with 
b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca not found: ID does not exist" containerID="b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.118195 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca"} err="failed to get container status \"b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca\": rpc error: code = NotFound desc = could not find container \"b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca\": container with ID starting with b15337b9b4fbf2bc94ff23018e9bad3539c36661d193c6d578e5b0fad89262ca not found: ID does not exist" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.136656 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/24bfaf69-146c-495b-bfe4-2c14543c64ba-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.237166 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5770a2b6-7618-4e7c-b57b-92ed0540fea7-etc-machine-id\") pod \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.237276 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-combined-ca-bundle\") pod \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.237327 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l2gs\" (UniqueName: 
\"kubernetes.io/projected/5770a2b6-7618-4e7c-b57b-92ed0540fea7-kube-api-access-6l2gs\") pod \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.237362 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-config-data\") pod \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.237400 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-config-data-custom\") pod \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.237460 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-scripts\") pod \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.237489 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-cert-memcached-mtls\") pod \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\" (UID: \"5770a2b6-7618-4e7c-b57b-92ed0540fea7\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.237999 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5770a2b6-7618-4e7c-b57b-92ed0540fea7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5770a2b6-7618-4e7c-b57b-92ed0540fea7" (UID: "5770a2b6-7618-4e7c-b57b-92ed0540fea7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.248004 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-scripts" (OuterVolumeSpecName: "scripts") pod "5770a2b6-7618-4e7c-b57b-92ed0540fea7" (UID: "5770a2b6-7618-4e7c-b57b-92ed0540fea7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.254156 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5770a2b6-7618-4e7c-b57b-92ed0540fea7-kube-api-access-6l2gs" (OuterVolumeSpecName: "kube-api-access-6l2gs") pod "5770a2b6-7618-4e7c-b57b-92ed0540fea7" (UID: "5770a2b6-7618-4e7c-b57b-92ed0540fea7"). InnerVolumeSpecName "kube-api-access-6l2gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.254911 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5770a2b6-7618-4e7c-b57b-92ed0540fea7" (UID: "5770a2b6-7618-4e7c-b57b-92ed0540fea7"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.267716 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.278666 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.286157 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 17 09:09:39 crc kubenswrapper[4813]: E0217 09:09:39.286729 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5770a2b6-7618-4e7c-b57b-92ed0540fea7" containerName="probe" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.286741 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5770a2b6-7618-4e7c-b57b-92ed0540fea7" containerName="probe" Feb 17 09:09:39 crc kubenswrapper[4813]: E0217 09:09:39.286753 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24bfaf69-146c-495b-bfe4-2c14543c64ba" containerName="cinder-backup" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.286759 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="24bfaf69-146c-495b-bfe4-2c14543c64ba" containerName="cinder-backup" Feb 17 09:09:39 crc kubenswrapper[4813]: E0217 09:09:39.286775 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24bfaf69-146c-495b-bfe4-2c14543c64ba" containerName="probe" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.286783 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="24bfaf69-146c-495b-bfe4-2c14543c64ba" containerName="probe" Feb 17 09:09:39 crc kubenswrapper[4813]: E0217 09:09:39.286807 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5770a2b6-7618-4e7c-b57b-92ed0540fea7" containerName="cinder-scheduler" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.286812 4813 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5770a2b6-7618-4e7c-b57b-92ed0540fea7" containerName="cinder-scheduler" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.286951 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5770a2b6-7618-4e7c-b57b-92ed0540fea7" containerName="probe" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.286967 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="24bfaf69-146c-495b-bfe4-2c14543c64ba" containerName="cinder-backup" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.286981 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="24bfaf69-146c-495b-bfe4-2c14543c64ba" containerName="probe" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.286991 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5770a2b6-7618-4e7c-b57b-92ed0540fea7" containerName="cinder-scheduler" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.293726 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.296487 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.321270 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.343407 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5770a2b6-7618-4e7c-b57b-92ed0540fea7" (UID: "5770a2b6-7618-4e7c-b57b-92ed0540fea7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.343862 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.343887 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l2gs\" (UniqueName: \"kubernetes.io/projected/5770a2b6-7618-4e7c-b57b-92ed0540fea7-kube-api-access-6l2gs\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.343896 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.343906 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.343929 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5770a2b6-7618-4e7c-b57b-92ed0540fea7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.393969 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-config-data" (OuterVolumeSpecName: "config-data") pod "5770a2b6-7618-4e7c-b57b-92ed0540fea7" (UID: "5770a2b6-7618-4e7c-b57b-92ed0540fea7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.422607 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "5770a2b6-7618-4e7c-b57b-92ed0540fea7" (UID: "5770a2b6-7618-4e7c-b57b-92ed0540fea7"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.445653 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.445953 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.446007 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.446206 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-sys\") pod \"cinder-backup-0\" (UID: 
\"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.446269 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.446321 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-lib-modules\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.447510 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.447606 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-run\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.447724 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-dev\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " 
pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.447848 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-scripts\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.447961 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.448058 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp8t2\" (UniqueName: \"kubernetes.io/projected/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-kube-api-access-xp8t2\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.448149 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.448235 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 
09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.448327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-config-data\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.448429 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.448567 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.448629 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5770a2b6-7618-4e7c-b57b-92ed0540fea7-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549648 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549693 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-lib-modules\") pod \"cinder-backup-0\" (UID: 
\"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549738 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549755 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-run\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549775 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-dev\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549807 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-scripts\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549831 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549848 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xp8t2\" (UniqueName: \"kubernetes.io/projected/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-kube-api-access-xp8t2\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549885 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549903 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-config-data\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549916 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-run\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549964 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-dev\") pod 
\"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549922 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.549933 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.550051 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.550176 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.550210 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc 
kubenswrapper[4813]: I0217 09:09:39.550241 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.550369 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-sys\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.550415 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.550010 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.550538 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-sys\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.550576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.550800 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-lib-modules\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.551442 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.555235 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.555481 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-config-data\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.558935 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " 
pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.558979 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-scripts\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.560056 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.571718 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp8t2\" (UniqueName: \"kubernetes.io/projected/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-kube-api-access-xp8t2\") pod \"cinder-backup-0\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.625076 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.678346 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.754475 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72cc6938-e3fd-4b74-ab51-81f30f7d7576-log-httpd\") pod \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.754834 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-config-data\") pod \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.754876 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-ceilometer-tls-certs\") pod \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.754918 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72cc6938-e3fd-4b74-ab51-81f30f7d7576-run-httpd\") pod \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.755079 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-combined-ca-bundle\") pod \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.755158 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmw77\" 
(UniqueName: \"kubernetes.io/projected/72cc6938-e3fd-4b74-ab51-81f30f7d7576-kube-api-access-dmw77\") pod \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.755298 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-sg-core-conf-yaml\") pod \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.755386 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-scripts\") pod \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\" (UID: \"72cc6938-e3fd-4b74-ab51-81f30f7d7576\") " Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.756675 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72cc6938-e3fd-4b74-ab51-81f30f7d7576-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "72cc6938-e3fd-4b74-ab51-81f30f7d7576" (UID: "72cc6938-e3fd-4b74-ab51-81f30f7d7576"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.757137 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72cc6938-e3fd-4b74-ab51-81f30f7d7576-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "72cc6938-e3fd-4b74-ab51-81f30f7d7576" (UID: "72cc6938-e3fd-4b74-ab51-81f30f7d7576"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.766398 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72cc6938-e3fd-4b74-ab51-81f30f7d7576-kube-api-access-dmw77" (OuterVolumeSpecName: "kube-api-access-dmw77") pod "72cc6938-e3fd-4b74-ab51-81f30f7d7576" (UID: "72cc6938-e3fd-4b74-ab51-81f30f7d7576"). InnerVolumeSpecName "kube-api-access-dmw77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.766753 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-scripts" (OuterVolumeSpecName: "scripts") pod "72cc6938-e3fd-4b74-ab51-81f30f7d7576" (UID: "72cc6938-e3fd-4b74-ab51-81f30f7d7576"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.813435 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "72cc6938-e3fd-4b74-ab51-81f30f7d7576" (UID: "72cc6938-e3fd-4b74-ab51-81f30f7d7576"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.833246 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "72cc6938-e3fd-4b74-ab51-81f30f7d7576" (UID: "72cc6938-e3fd-4b74-ab51-81f30f7d7576"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.859027 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.859140 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.859181 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72cc6938-e3fd-4b74-ab51-81f30f7d7576-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.859195 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.859205 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72cc6938-e3fd-4b74-ab51-81f30f7d7576-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.859238 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmw77\" (UniqueName: \"kubernetes.io/projected/72cc6938-e3fd-4b74-ab51-81f30f7d7576-kube-api-access-dmw77\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.881941 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72cc6938-e3fd-4b74-ab51-81f30f7d7576" (UID: 
"72cc6938-e3fd-4b74-ab51-81f30f7d7576"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.910667 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-config-data" (OuterVolumeSpecName: "config-data") pod "72cc6938-e3fd-4b74-ab51-81f30f7d7576" (UID: "72cc6938-e3fd-4b74-ab51-81f30f7d7576"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.946424 4813 generic.go:334] "Generic (PLEG): container finished" podID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerID="6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1" exitCode=0 Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.946495 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"72cc6938-e3fd-4b74-ab51-81f30f7d7576","Type":"ContainerDied","Data":"6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1"} Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.946519 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"72cc6938-e3fd-4b74-ab51-81f30f7d7576","Type":"ContainerDied","Data":"ae7753c31831f1b6a0f952773a0c7b06a2cd8e37d6f459cebcab7102b3bcce83"} Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.946552 4813 scope.go:117] "RemoveContainer" containerID="e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.946707 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.949989 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.960678 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:39 crc kubenswrapper[4813]: I0217 09:09:39.960714 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cc6938-e3fd-4b74-ab51-81f30f7d7576-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.108994 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/watcher-decision-engine/0.log" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.116834 4813 scope.go:117] "RemoveContainer" containerID="50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.144405 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.144672 4813 scope.go:117] "RemoveContainer" containerID="6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.169453 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.194716 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.214391 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.221404 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.228080 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:09:40 crc kubenswrapper[4813]: E0217 09:09:40.228470 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="ceilometer-central-agent" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.228485 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="ceilometer-central-agent" Feb 17 09:09:40 crc kubenswrapper[4813]: E0217 09:09:40.228493 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="proxy-httpd" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.228499 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="proxy-httpd" Feb 17 09:09:40 crc kubenswrapper[4813]: E0217 09:09:40.228516 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="sg-core" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.228521 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="sg-core" Feb 17 09:09:40 crc kubenswrapper[4813]: E0217 09:09:40.228533 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="ceilometer-notification-agent" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.228539 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="ceilometer-notification-agent" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.228674 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" 
containerName="ceilometer-central-agent" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.228692 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="sg-core" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.228704 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="ceilometer-notification-agent" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.228713 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" containerName="proxy-httpd" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.230094 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.232738 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.232789 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.232914 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.235036 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.236402 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.237568 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.241700 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.251229 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.294362 4813 scope.go:117] "RemoveContainer" containerID="5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.332604 4813 scope.go:117] "RemoveContainer" containerID="e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449" Feb 17 09:09:40 crc kubenswrapper[4813]: E0217 09:09:40.333069 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449\": container with ID starting with e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449 not found: ID does not exist" containerID="e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.333099 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449"} err="failed to get container status \"e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449\": rpc error: code = NotFound desc = could not find container \"e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449\": container with ID starting with e2031d8e184e8725ec83fe251134da09a426ff1e41b7d4a3cba46887b033d449 not found: ID does not exist" 
Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.333143 4813 scope.go:117] "RemoveContainer" containerID="50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970" Feb 17 09:09:40 crc kubenswrapper[4813]: E0217 09:09:40.333567 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970\": container with ID starting with 50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970 not found: ID does not exist" containerID="50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.333613 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970"} err="failed to get container status \"50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970\": rpc error: code = NotFound desc = could not find container \"50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970\": container with ID starting with 50801ee879b22a4fd35dd70c92f2e40349cb919273da2360d1934d3d01884970 not found: ID does not exist" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.333633 4813 scope.go:117] "RemoveContainer" containerID="6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1" Feb 17 09:09:40 crc kubenswrapper[4813]: E0217 09:09:40.335121 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1\": container with ID starting with 6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1 not found: ID does not exist" containerID="6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.335156 4813 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1"} err="failed to get container status \"6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1\": rpc error: code = NotFound desc = could not find container \"6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1\": container with ID starting with 6997bd25bf92978c8c81c7b5541e4c9cba576bc3b487003c68bf8c8dc106fef1 not found: ID does not exist" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.335174 4813 scope.go:117] "RemoveContainer" containerID="5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254" Feb 17 09:09:40 crc kubenswrapper[4813]: E0217 09:09:40.335522 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254\": container with ID starting with 5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254 not found: ID does not exist" containerID="5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.335550 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254"} err="failed to get container status \"5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254\": rpc error: code = NotFound desc = could not find container \"5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254\": container with ID starting with 5cb1371a8d9faf56792ef8ec18176a5de322f32c8f934e3a28a693578fdeb254 not found: ID does not exist" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.366930 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98ccn\" (UniqueName: 
\"kubernetes.io/projected/6a688c77-0e32-4f07-91f3-9a69d7b26e66-kube-api-access-98ccn\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.366996 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-scripts\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367044 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-scripts\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29z9c\" (UniqueName: \"kubernetes.io/projected/ed2dfee8-4b50-469d-9ee6-036af68b084e-kube-api-access-29z9c\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367186 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-config-data\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367206 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367258 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367287 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-config-data\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367345 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367369 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed2dfee8-4b50-469d-9ee6-036af68b084e-log-httpd\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367413 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ed2dfee8-4b50-469d-9ee6-036af68b084e-run-httpd\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367468 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367537 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367608 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a688c77-0e32-4f07-91f3-9a69d7b26e66-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.367654 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.469496 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98ccn\" (UniqueName: 
\"kubernetes.io/projected/6a688c77-0e32-4f07-91f3-9a69d7b26e66-kube-api-access-98ccn\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.470017 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-scripts\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.470086 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-scripts\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.470147 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29z9c\" (UniqueName: \"kubernetes.io/projected/ed2dfee8-4b50-469d-9ee6-036af68b084e-kube-api-access-29z9c\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.470171 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-config-data\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.471082 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.471124 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.471157 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-config-data\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.471190 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.471218 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed2dfee8-4b50-469d-9ee6-036af68b084e-log-httpd\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.471250 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed2dfee8-4b50-469d-9ee6-036af68b084e-run-httpd\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 
09:09:40.471369 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.471442 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.471512 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a688c77-0e32-4f07-91f3-9a69d7b26e66-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.471549 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.471844 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed2dfee8-4b50-469d-9ee6-036af68b084e-log-httpd\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.471951 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6a688c77-0e32-4f07-91f3-9a69d7b26e66-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.472075 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed2dfee8-4b50-469d-9ee6-036af68b084e-run-httpd\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.476940 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.477037 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.477173 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-config-data\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.478227 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.478435 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-config-data\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.478607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.478633 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-scripts\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.478968 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-scripts\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.479743 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.488206 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-98ccn\" (UniqueName: \"kubernetes.io/projected/6a688c77-0e32-4f07-91f3-9a69d7b26e66-kube-api-access-98ccn\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.490006 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.490701 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29z9c\" (UniqueName: \"kubernetes.io/projected/ed2dfee8-4b50-469d-9ee6-036af68b084e-kube-api-access-29z9c\") pod \"ceilometer-0\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.493563 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.572722 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-config-data\") pod \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.572818 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-custom-prometheus-ca\") pod \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.572889 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-logs\") pod \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.572986 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-cert-memcached-mtls\") pod \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.573044 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-combined-ca-bundle\") pod \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.573074 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4csb8\" (UniqueName: \"kubernetes.io/projected/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-kube-api-access-4csb8\") pod \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\" (UID: \"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca\") " Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.574724 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-logs" (OuterVolumeSpecName: "logs") pod "52f6f3c3-f62e-4cb7-9c9d-2b518d688eca" (UID: "52f6f3c3-f62e-4cb7-9c9d-2b518d688eca"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.577609 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-kube-api-access-4csb8" (OuterVolumeSpecName: "kube-api-access-4csb8") pod "52f6f3c3-f62e-4cb7-9c9d-2b518d688eca" (UID: "52f6f3c3-f62e-4cb7-9c9d-2b518d688eca"). InnerVolumeSpecName "kube-api-access-4csb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.582386 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.594632 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.600640 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52f6f3c3-f62e-4cb7-9c9d-2b518d688eca" (UID: "52f6f3c3-f62e-4cb7-9c9d-2b518d688eca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.607650 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "52f6f3c3-f62e-4cb7-9c9d-2b518d688eca" (UID: "52f6f3c3-f62e-4cb7-9c9d-2b518d688eca"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.644496 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-config-data" (OuterVolumeSpecName: "config-data") pod "52f6f3c3-f62e-4cb7-9c9d-2b518d688eca" (UID: "52f6f3c3-f62e-4cb7-9c9d-2b518d688eca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.660583 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "52f6f3c3-f62e-4cb7-9c9d-2b518d688eca" (UID: "52f6f3c3-f62e-4cb7-9c9d-2b518d688eca"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.680952 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4csb8\" (UniqueName: \"kubernetes.io/projected/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-kube-api-access-4csb8\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.680986 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.680996 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.681004 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-logs\") on node \"crc\" DevicePath \"\"" 
Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.681019 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.681039 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.961078 4813 generic.go:334] "Generic (PLEG): container finished" podID="52f6f3c3-f62e-4cb7-9c9d-2b518d688eca" containerID="141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808" exitCode=0 Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.961136 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca","Type":"ContainerDied","Data":"141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808"} Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.961161 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"52f6f3c3-f62e-4cb7-9c9d-2b518d688eca","Type":"ContainerDied","Data":"43fb1c4cef43398f33019c17ef821c03f9dfe94faa9a6bd0260b8713d6a633d0"} Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.961176 4813 scope.go:117] "RemoveContainer" containerID="141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.961255 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.981851 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53","Type":"ContainerStarted","Data":"f29d915a24400342adc62687846b6ce0ea878a68b6b1a6ba3598c97a55cad776"} Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.981895 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53","Type":"ContainerStarted","Data":"9e39e48ba9dcf0525fe5da982d484e149c97d8836743430591806fec13184f87"} Feb 17 09:09:40 crc kubenswrapper[4813]: I0217 09:09:40.981906 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53","Type":"ContainerStarted","Data":"6ba0ec04d37f27ef837a3bf0bd7ca5ba5387cdd53691a3b2f5ab28a1e5fbeb80"} Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.001299 4813 scope.go:117] "RemoveContainer" containerID="141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808" Feb 17 09:09:41 crc kubenswrapper[4813]: E0217 09:09:41.001701 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808\": container with ID starting with 141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808 not found: ID does not exist" containerID="141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.001727 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808"} err="failed to get container status 
\"141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808\": rpc error: code = NotFound desc = could not find container \"141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808\": container with ID starting with 141f4df8998b8596000ac8fce9b20c10c8435beaa3667c18f7b87730d6608808 not found: ID does not exist" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.023008 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=2.02298617 podStartE2EDuration="2.02298617s" podCreationTimestamp="2026-02-17 09:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:09:41.011931406 +0000 UTC m=+1728.672692649" watchObservedRunningTime="2026-02-17 09:09:41.02298617 +0000 UTC m=+1728.683747393" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.037686 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.050403 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.060044 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:09:41 crc kubenswrapper[4813]: E0217 09:09:41.060457 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f6f3c3-f62e-4cb7-9c9d-2b518d688eca" containerName="watcher-decision-engine" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.060474 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f6f3c3-f62e-4cb7-9c9d-2b518d688eca" containerName="watcher-decision-engine" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.060638 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="52f6f3c3-f62e-4cb7-9c9d-2b518d688eca" containerName="watcher-decision-engine" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.061271 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.061321 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.064180 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.121830 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24bfaf69-146c-495b-bfe4-2c14543c64ba" path="/var/lib/kubelet/pods/24bfaf69-146c-495b-bfe4-2c14543c64ba/volumes" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.122698 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f6f3c3-f62e-4cb7-9c9d-2b518d688eca" path="/var/lib/kubelet/pods/52f6f3c3-f62e-4cb7-9c9d-2b518d688eca/volumes" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.123358 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5770a2b6-7618-4e7c-b57b-92ed0540fea7" path="/var/lib/kubelet/pods/5770a2b6-7618-4e7c-b57b-92ed0540fea7/volumes" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.131240 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72cc6938-e3fd-4b74-ab51-81f30f7d7576" path="/var/lib/kubelet/pods/72cc6938-e3fd-4b74-ab51-81f30f7d7576/volumes" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.132283 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.187721 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 17 
09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.191882 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5cxw\" (UniqueName: \"kubernetes.io/projected/e289f880-3998-42a1-8ed2-1d0e1f356d36-kube-api-access-z5cxw\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.191921 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e289f880-3998-42a1-8ed2-1d0e1f356d36-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.191944 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.191973 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.192067 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-config-data\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.192090 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.293756 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5cxw\" (UniqueName: \"kubernetes.io/projected/e289f880-3998-42a1-8ed2-1d0e1f356d36-kube-api-access-z5cxw\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.294164 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e289f880-3998-42a1-8ed2-1d0e1f356d36-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.294196 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.294235 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.294373 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.294412 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.298919 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.299600 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e289f880-3998-42a1-8ed2-1d0e1f356d36-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.299835 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.302728 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.312965 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.314896 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5cxw\" (UniqueName: \"kubernetes.io/projected/e289f880-3998-42a1-8ed2-1d0e1f356d36-kube-api-access-z5cxw\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.374997 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:09:41 crc kubenswrapper[4813]: I0217 09:09:41.902210 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:09:42 crc kubenswrapper[4813]: I0217 09:09:42.012918 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"6a688c77-0e32-4f07-91f3-9a69d7b26e66","Type":"ContainerStarted","Data":"c35168afb00c4646f41b37faa3f37ae1fededa83bebceb68dd206001f73bf3f3"} Feb 17 09:09:42 crc kubenswrapper[4813]: I0217 09:09:42.022521 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ed2dfee8-4b50-469d-9ee6-036af68b084e","Type":"ContainerStarted","Data":"1cc4bca3d382b5e45f3ff845c5da9148ee580c6fbd206075c8a272bd77cf19e8"} Feb 17 09:09:42 crc kubenswrapper[4813]: I0217 09:09:42.024457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e289f880-3998-42a1-8ed2-1d0e1f356d36","Type":"ContainerStarted","Data":"8e10edf8241d1be74dc5249d7cca5f9061bf37446385f7e007c003848b87c126"} Feb 17 09:09:43 crc kubenswrapper[4813]: I0217 09:09:43.038948 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"6a688c77-0e32-4f07-91f3-9a69d7b26e66","Type":"ContainerStarted","Data":"2cba32984b362663d70076f4cebab9828a2858207bc2c1471b152360a4ca406d"} Feb 17 09:09:43 crc kubenswrapper[4813]: I0217 09:09:43.039442 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"6a688c77-0e32-4f07-91f3-9a69d7b26e66","Type":"ContainerStarted","Data":"fb41d8641cc2e931c946d0cfd6bb86f825103c005bf5bbc33b39a8dd7f8dea22"} Feb 17 09:09:43 crc kubenswrapper[4813]: I0217 09:09:43.041642 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ed2dfee8-4b50-469d-9ee6-036af68b084e","Type":"ContainerStarted","Data":"4252bb72f6507c5e549dd44be11920c60765dddf21c39f9806dee53569b2748b"}
Feb 17 09:09:43 crc kubenswrapper[4813]: I0217 09:09:43.043407 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e289f880-3998-42a1-8ed2-1d0e1f356d36","Type":"ContainerStarted","Data":"fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73"}
Feb 17 09:09:43 crc kubenswrapper[4813]: I0217 09:09:43.067840 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=3.067824569 podStartE2EDuration="3.067824569s" podCreationTimestamp="2026-02-17 09:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:09:43.060753348 +0000 UTC m=+1730.721514571" watchObservedRunningTime="2026-02-17 09:09:43.067824569 +0000 UTC m=+1730.728585782"
Feb 17 09:09:43 crc kubenswrapper[4813]: I0217 09:09:43.082677 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.082664801 podStartE2EDuration="2.082664801s" podCreationTimestamp="2026-02-17 09:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:09:43.079912902 +0000 UTC m=+1730.740674135" watchObservedRunningTime="2026-02-17 09:09:43.082664801 +0000 UTC m=+1730.743426024"
Feb 17 09:09:43 crc kubenswrapper[4813]: I0217 09:09:43.659080 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:44 crc kubenswrapper[4813]: I0217 09:09:44.066806 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ed2dfee8-4b50-469d-9ee6-036af68b084e","Type":"ContainerStarted","Data":"7fa9305a62afa4dc4cf81bb60321e338fa262debbe0f6973d16f4309dbbad5fe"}
Feb 17 09:09:44 crc kubenswrapper[4813]: I0217 09:09:44.066847 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ed2dfee8-4b50-469d-9ee6-036af68b084e","Type":"ContainerStarted","Data":"738d1f8f4effb08da491817e26b0449ea359e015429272539a71203c8c98a239"}
Feb 17 09:09:44 crc kubenswrapper[4813]: I0217 09:09:44.626356 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0"
Feb 17 09:09:44 crc kubenswrapper[4813]: I0217 09:09:44.856171 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:45 crc kubenswrapper[4813]: I0217 09:09:45.111802 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58"
Feb 17 09:09:45 crc kubenswrapper[4813]: E0217 09:09:45.112893 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:09:45 crc kubenswrapper[4813]: I0217 09:09:45.595920 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0"
Feb 17 09:09:46 crc kubenswrapper[4813]: I0217 09:09:46.087655 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ed2dfee8-4b50-469d-9ee6-036af68b084e","Type":"ContainerStarted","Data":"006775ce68d1da5c94d4aefcda941200d9f9f11e8cd40fca287332e8a6131350"}
Feb 17 09:09:46 crc kubenswrapper[4813]: I0217 09:09:46.088719 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:09:46 crc kubenswrapper[4813]: I0217 09:09:46.099268 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:46 crc kubenswrapper[4813]: I0217 09:09:46.126941 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.399741531 podStartE2EDuration="6.126925911s" podCreationTimestamp="2026-02-17 09:09:40 +0000 UTC" firstStartedPulling="2026-02-17 09:09:41.144785944 +0000 UTC m=+1728.805547167" lastFinishedPulling="2026-02-17 09:09:44.871970324 +0000 UTC m=+1732.532731547" observedRunningTime="2026-02-17 09:09:46.121649701 +0000 UTC m=+1733.782410924" watchObservedRunningTime="2026-02-17 09:09:46.126925911 +0000 UTC m=+1733.787687134"
Feb 17 09:09:47 crc kubenswrapper[4813]: I0217 09:09:47.312573 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:48 crc kubenswrapper[4813]: I0217 09:09:48.522384 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:49 crc kubenswrapper[4813]: I0217 09:09:49.757509 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:49 crc kubenswrapper[4813]: I0217 09:09:49.874910 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0"
Feb 17 09:09:50 crc kubenswrapper[4813]: I0217 09:09:50.830995 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-scheduler-0"
Feb 17 09:09:50 crc kubenswrapper[4813]: I0217 09:09:50.972731 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:51 crc kubenswrapper[4813]: I0217 09:09:51.375664 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:09:51 crc kubenswrapper[4813]: I0217 09:09:51.421950 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:09:52 crc kubenswrapper[4813]: I0217 09:09:52.142058 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:09:52 crc kubenswrapper[4813]: I0217 09:09:52.192260 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:09:52 crc kubenswrapper[4813]: I0217 09:09:52.209220 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.437422 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.731810 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.740963 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-5w2sh"]
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.750385 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-5w2sh"]
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.777502 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"]
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.777757 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" containerName="cinder-backup" containerID="cri-o://9e39e48ba9dcf0525fe5da982d484e149c97d8836743430591806fec13184f87" gracePeriod=30
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.777865 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" containerName="probe" containerID="cri-o://f29d915a24400342adc62687846b6ce0ea878a68b6b1a6ba3598c97a55cad776" gracePeriod=30
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.794721 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"]
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.794988 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="6a688c77-0e32-4f07-91f3-9a69d7b26e66" containerName="cinder-scheduler" containerID="cri-o://fb41d8641cc2e931c946d0cfd6bb86f825103c005bf5bbc33b39a8dd7f8dea22" gracePeriod=30
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.795040 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="6a688c77-0e32-4f07-91f3-9a69d7b26e66" containerName="probe" containerID="cri-o://2cba32984b362663d70076f4cebab9828a2858207bc2c1471b152360a4ca406d" gracePeriod=30
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.861721 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder7579-account-delete-9wh5p"]
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.863028 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p"
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.873576 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder7579-account-delete-9wh5p"]
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.959328 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-strm8\" (UniqueName: \"kubernetes.io/projected/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a-kube-api-access-strm8\") pod \"cinder7579-account-delete-9wh5p\" (UID: \"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a\") " pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p"
Feb 17 09:09:53 crc kubenswrapper[4813]: I0217 09:09:53.959416 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a-operator-scripts\") pod \"cinder7579-account-delete-9wh5p\" (UID: \"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a\") " pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p"
Feb 17 09:09:54 crc kubenswrapper[4813]: I0217 09:09:54.060996 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a-operator-scripts\") pod \"cinder7579-account-delete-9wh5p\" (UID: \"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a\") " pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p"
Feb 17 09:09:54 crc kubenswrapper[4813]: I0217 09:09:54.061133 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-strm8\" (UniqueName: \"kubernetes.io/projected/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a-kube-api-access-strm8\") pod \"cinder7579-account-delete-9wh5p\" (UID: \"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a\") " pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p"
Feb 17 09:09:54 crc kubenswrapper[4813]: I0217 09:09:54.061761 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a-operator-scripts\") pod \"cinder7579-account-delete-9wh5p\" (UID: \"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a\") " pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p"
Feb 17 09:09:54 crc kubenswrapper[4813]: I0217 09:09:54.080946 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-strm8\" (UniqueName: \"kubernetes.io/projected/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a-kube-api-access-strm8\") pod \"cinder7579-account-delete-9wh5p\" (UID: \"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a\") " pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p"
Feb 17 09:09:54 crc kubenswrapper[4813]: I0217 09:09:54.177702 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p"
Feb 17 09:09:54 crc kubenswrapper[4813]: I0217 09:09:54.711940 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder7579-account-delete-9wh5p"]
Feb 17 09:09:54 crc kubenswrapper[4813]: W0217 09:09:54.735524 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c8b0b7b_cb68_4548_9dd1_bacdf5101e9a.slice/crio-c9b0fd6d13e0ad8fc74ea2a203ee08a192551136f065cec1174b28e9b49a980c WatchSource:0}: Error finding container c9b0fd6d13e0ad8fc74ea2a203ee08a192551136f065cec1174b28e9b49a980c: Status 404 returned error can't find the container with id c9b0fd6d13e0ad8fc74ea2a203ee08a192551136f065cec1174b28e9b49a980c
Feb 17 09:09:54 crc kubenswrapper[4813]: I0217 09:09:54.917779 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:55 crc kubenswrapper[4813]: I0217 09:09:55.124996 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80820ffe-8422-4c04-b626-92f1d9f8e982" path="/var/lib/kubelet/pods/80820ffe-8422-4c04-b626-92f1d9f8e982/volumes"
Feb 17 09:09:55 crc kubenswrapper[4813]: I0217 09:09:55.180457 4813 generic.go:334] "Generic (PLEG): container finished" podID="2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a" containerID="d65ae16037d07388d59fa655f4a3a12db5fa7368befd9f9601bb36e4f0ce6236" exitCode=0
Feb 17 09:09:55 crc kubenswrapper[4813]: I0217 09:09:55.180554 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p" event={"ID":"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a","Type":"ContainerDied","Data":"d65ae16037d07388d59fa655f4a3a12db5fa7368befd9f9601bb36e4f0ce6236"}
Feb 17 09:09:55 crc kubenswrapper[4813]: I0217 09:09:55.180613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p" event={"ID":"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a","Type":"ContainerStarted","Data":"c9b0fd6d13e0ad8fc74ea2a203ee08a192551136f065cec1174b28e9b49a980c"}
Feb 17 09:09:55 crc kubenswrapper[4813]: I0217 09:09:55.183154 4813 generic.go:334] "Generic (PLEG): container finished" podID="ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" containerID="f29d915a24400342adc62687846b6ce0ea878a68b6b1a6ba3598c97a55cad776" exitCode=0
Feb 17 09:09:55 crc kubenswrapper[4813]: I0217 09:09:55.183189 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53","Type":"ContainerDied","Data":"f29d915a24400342adc62687846b6ce0ea878a68b6b1a6ba3598c97a55cad776"}
Feb 17 09:09:55 crc kubenswrapper[4813]: I0217 09:09:55.185670 4813 generic.go:334] "Generic (PLEG): container finished" podID="6a688c77-0e32-4f07-91f3-9a69d7b26e66" containerID="2cba32984b362663d70076f4cebab9828a2858207bc2c1471b152360a4ca406d" exitCode=0
Feb 17 09:09:55 crc kubenswrapper[4813]: I0217 09:09:55.185700 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"6a688c77-0e32-4f07-91f3-9a69d7b26e66","Type":"ContainerDied","Data":"2cba32984b362663d70076f4cebab9828a2858207bc2c1471b152360a4ca406d"}
Feb 17 09:09:55 crc kubenswrapper[4813]: I0217 09:09:55.581924 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Feb 17 09:09:55 crc kubenswrapper[4813]: I0217 09:09:55.582142 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="e289f880-3998-42a1-8ed2-1d0e1f356d36" containerName="watcher-decision-engine" containerID="cri-o://fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73" gracePeriod=30
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.105755 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.207275 4813 generic.go:334] "Generic (PLEG): container finished" podID="678e07d8-2ac8-4504-8834-685bc3a4ecfd" containerID="3c496862fe6cfa62c49ec272dfac0ce13ec3203fba44633d9d140d97167cea19" exitCode=137
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.207362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"678e07d8-2ac8-4504-8834-685bc3a4ecfd","Type":"ContainerDied","Data":"3c496862fe6cfa62c49ec272dfac0ce13ec3203fba44633d9d140d97167cea19"}
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.297699 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.297981 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="ceilometer-central-agent" containerID="cri-o://4252bb72f6507c5e549dd44be11920c60765dddf21c39f9806dee53569b2748b" gracePeriod=30
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.298100 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="proxy-httpd" containerID="cri-o://006775ce68d1da5c94d4aefcda941200d9f9f11e8cd40fca287332e8a6131350" gracePeriod=30
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.298143 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="sg-core" containerID="cri-o://7fa9305a62afa4dc4cf81bb60321e338fa262debbe0f6973d16f4309dbbad5fe" gracePeriod=30
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.298176 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="ceilometer-notification-agent" containerID="cri-o://738d1f8f4effb08da491817e26b0449ea359e015429272539a71203c8c98a239" gracePeriod=30
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.317048 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.235:3000/\": EOF"
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.350139 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0"
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.520239 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p"
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.521035 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hxv4\" (UniqueName: \"kubernetes.io/projected/678e07d8-2ac8-4504-8834-685bc3a4ecfd-kube-api-access-9hxv4\") pod \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") "
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.521108 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-scripts\") pod \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") "
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.521154 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/678e07d8-2ac8-4504-8834-685bc3a4ecfd-etc-machine-id\") pod \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") "
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.521193 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-cert-memcached-mtls\") pod \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") "
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.521213 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-combined-ca-bundle\") pod \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") "
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.521275 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-config-data-custom\") pod \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") "
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.521312 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678e07d8-2ac8-4504-8834-685bc3a4ecfd-logs\") pod \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") "
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.521368 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-config-data\") pod \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\" (UID: \"678e07d8-2ac8-4504-8834-685bc3a4ecfd\") "
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.526448 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "678e07d8-2ac8-4504-8834-685bc3a4ecfd" (UID: "678e07d8-2ac8-4504-8834-685bc3a4ecfd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.528751 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/678e07d8-2ac8-4504-8834-685bc3a4ecfd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "678e07d8-2ac8-4504-8834-685bc3a4ecfd" (UID: "678e07d8-2ac8-4504-8834-685bc3a4ecfd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.529003 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/678e07d8-2ac8-4504-8834-685bc3a4ecfd-logs" (OuterVolumeSpecName: "logs") pod "678e07d8-2ac8-4504-8834-685bc3a4ecfd" (UID: "678e07d8-2ac8-4504-8834-685bc3a4ecfd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.530828 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-scripts" (OuterVolumeSpecName: "scripts") pod "678e07d8-2ac8-4504-8834-685bc3a4ecfd" (UID: "678e07d8-2ac8-4504-8834-685bc3a4ecfd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.535691 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678e07d8-2ac8-4504-8834-685bc3a4ecfd-kube-api-access-9hxv4" (OuterVolumeSpecName: "kube-api-access-9hxv4") pod "678e07d8-2ac8-4504-8834-685bc3a4ecfd" (UID: "678e07d8-2ac8-4504-8834-685bc3a4ecfd"). InnerVolumeSpecName "kube-api-access-9hxv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.589266 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "678e07d8-2ac8-4504-8834-685bc3a4ecfd" (UID: "678e07d8-2ac8-4504-8834-685bc3a4ecfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.606440 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "678e07d8-2ac8-4504-8834-685bc3a4ecfd" (UID: "678e07d8-2ac8-4504-8834-685bc3a4ecfd"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.622774 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a-operator-scripts\") pod \"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a\" (UID: \"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a\") "
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.622872 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-strm8\" (UniqueName: \"kubernetes.io/projected/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a-kube-api-access-strm8\") pod \"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a\" (UID: \"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a\") "
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.623352 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.623369 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.623378 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.623385 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/678e07d8-2ac8-4504-8834-685bc3a4ecfd-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.623396 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hxv4\" (UniqueName: \"kubernetes.io/projected/678e07d8-2ac8-4504-8834-685bc3a4ecfd-kube-api-access-9hxv4\") on node \"crc\" DevicePath \"\""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.623406 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.623414 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/678e07d8-2ac8-4504-8834-685bc3a4ecfd-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.623607 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a" (UID: "2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.626734 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a-kube-api-access-strm8" (OuterVolumeSpecName: "kube-api-access-strm8") pod "2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a" (UID: "2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a"). InnerVolumeSpecName "kube-api-access-strm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.627034 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-config-data" (OuterVolumeSpecName: "config-data") pod "678e07d8-2ac8-4504-8834-685bc3a4ecfd" (UID: "678e07d8-2ac8-4504-8834-685bc3a4ecfd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.724872 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.724911 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-strm8\" (UniqueName: \"kubernetes.io/projected/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a-kube-api-access-strm8\") on node \"crc\" DevicePath \"\""
Feb 17 09:09:56 crc kubenswrapper[4813]: I0217 09:09:56.724926 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/678e07d8-2ac8-4504-8834-685bc3a4ecfd-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.111380 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58"
Feb 17 09:09:57 crc kubenswrapper[4813]: E0217 09:09:57.112122 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.217797 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"678e07d8-2ac8-4504-8834-685bc3a4ecfd","Type":"ContainerDied","Data":"260e001ddea3004d5b6d26f60714b2e9ec2162300717ce58e469d289d4eb6b17"}
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.217838 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0"
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.217862 4813 scope.go:117] "RemoveContainer" containerID="3c496862fe6cfa62c49ec272dfac0ce13ec3203fba44633d9d140d97167cea19"
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.219819 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p"
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.219822 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder7579-account-delete-9wh5p" event={"ID":"2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a","Type":"ContainerDied","Data":"c9b0fd6d13e0ad8fc74ea2a203ee08a192551136f065cec1174b28e9b49a980c"}
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.219861 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b0fd6d13e0ad8fc74ea2a203ee08a192551136f065cec1174b28e9b49a980c"
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.222985 4813 generic.go:334] "Generic (PLEG): container finished" podID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerID="006775ce68d1da5c94d4aefcda941200d9f9f11e8cd40fca287332e8a6131350" exitCode=0
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.223011 4813 generic.go:334] "Generic (PLEG): container finished" podID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerID="7fa9305a62afa4dc4cf81bb60321e338fa262debbe0f6973d16f4309dbbad5fe" exitCode=2
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.223022 4813 generic.go:334] "Generic (PLEG): container finished" podID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerID="4252bb72f6507c5e549dd44be11920c60765dddf21c39f9806dee53569b2748b" exitCode=0
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.223038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ed2dfee8-4b50-469d-9ee6-036af68b084e","Type":"ContainerDied","Data":"006775ce68d1da5c94d4aefcda941200d9f9f11e8cd40fca287332e8a6131350"}
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.223085 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ed2dfee8-4b50-469d-9ee6-036af68b084e","Type":"ContainerDied","Data":"7fa9305a62afa4dc4cf81bb60321e338fa262debbe0f6973d16f4309dbbad5fe"}
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.223102 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ed2dfee8-4b50-469d-9ee6-036af68b084e","Type":"ContainerDied","Data":"4252bb72f6507c5e549dd44be11920c60765dddf21c39f9806dee53569b2748b"}
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.241576 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"]
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.242996 4813 scope.go:117] "RemoveContainer" containerID="603c06b6b98e4acd098dd5dd2063edd4d2ba069856afdd0ac2826bd1163daf17"
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.255238 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-api-0"]
Feb 17 09:09:57 crc kubenswrapper[4813]: I0217 09:09:57.294920 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.242051 4813 generic.go:334] "Generic (PLEG): container finished" podID="ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" containerID="9e39e48ba9dcf0525fe5da982d484e149c97d8836743430591806fec13184f87" exitCode=0
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.242142 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53","Type":"ContainerDied","Data":"9e39e48ba9dcf0525fe5da982d484e149c97d8836743430591806fec13184f87"}
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.246884 4813 generic.go:334] "Generic (PLEG): container finished" podID="6a688c77-0e32-4f07-91f3-9a69d7b26e66" containerID="fb41d8641cc2e931c946d0cfd6bb86f825103c005bf5bbc33b39a8dd7f8dea22" exitCode=0
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.246908 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"6a688c77-0e32-4f07-91f3-9a69d7b26e66","Type":"ContainerDied","Data":"fb41d8641cc2e931c946d0cfd6bb86f825103c005bf5bbc33b39a8dd7f8dea22"}
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.455724 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log"
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.596957 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0"
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.608057 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0"
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.758729 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-lib-modules\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") "
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.758780 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-combined-ca-bundle\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") "
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.758817 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-config-data-custom\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") "
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.758849 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-config-data\") pod \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") "
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.758871 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-nvme\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") "
Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.758890 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\"
(UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-cert-memcached-mtls\") pod \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.758912 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-cert-memcached-mtls\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.758946 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a688c77-0e32-4f07-91f3-9a69d7b26e66-etc-machine-id\") pod \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.758971 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-config-data-custom\") pod \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.758998 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-locks-brick\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.759035 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-combined-ca-bundle\") pod \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " Feb 17 09:09:58 crc 
kubenswrapper[4813]: I0217 09:09:58.759057 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-dev\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.759074 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-scripts\") pod \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.759092 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-run\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.759113 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-config-data\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.759257 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a688c77-0e32-4f07-91f3-9a69d7b26e66-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6a688c77-0e32-4f07-91f3-9a69d7b26e66" (UID: "6a688c77-0e32-4f07-91f3-9a69d7b26e66"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.759272 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-dev" (OuterVolumeSpecName: "dev") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.759289 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.759296 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.759445 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.759994 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp8t2\" (UniqueName: \"kubernetes.io/projected/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-kube-api-access-xp8t2\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760032 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-iscsi\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760068 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-lib-cinder\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760098 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98ccn\" (UniqueName: \"kubernetes.io/projected/6a688c77-0e32-4f07-91f3-9a69d7b26e66-kube-api-access-98ccn\") pod \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\" (UID: \"6a688c77-0e32-4f07-91f3-9a69d7b26e66\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760148 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-locks-cinder\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760178 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-scripts\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760194 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-machine-id\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760220 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-sys\") pod \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\" (UID: \"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53\") " Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760385 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760434 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-run" (OuterVolumeSpecName: "run") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760479 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-sys" (OuterVolumeSpecName: "sys") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760646 4813 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-sys\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760658 4813 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-lib-modules\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760667 4813 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-nvme\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760675 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a688c77-0e32-4f07-91f3-9a69d7b26e66-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760685 4813 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-locks-brick\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760694 4813 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-dev\") on node \"crc\" 
DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760702 4813 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-run\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760710 4813 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-iscsi\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760733 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760841 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.760906 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.764402 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.764433 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6a688c77-0e32-4f07-91f3-9a69d7b26e66" (UID: "6a688c77-0e32-4f07-91f3-9a69d7b26e66"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.764466 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a688c77-0e32-4f07-91f3-9a69d7b26e66-kube-api-access-98ccn" (OuterVolumeSpecName: "kube-api-access-98ccn") pod "6a688c77-0e32-4f07-91f3-9a69d7b26e66" (UID: "6a688c77-0e32-4f07-91f3-9a69d7b26e66"). InnerVolumeSpecName "kube-api-access-98ccn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.767521 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-scripts" (OuterVolumeSpecName: "scripts") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.768434 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-kube-api-access-xp8t2" (OuterVolumeSpecName: "kube-api-access-xp8t2") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "kube-api-access-xp8t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.777542 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-scripts" (OuterVolumeSpecName: "scripts") pod "6a688c77-0e32-4f07-91f3-9a69d7b26e66" (UID: "6a688c77-0e32-4f07-91f3-9a69d7b26e66"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.857635 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a688c77-0e32-4f07-91f3-9a69d7b26e66" (UID: "6a688c77-0e32-4f07-91f3-9a69d7b26e66"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.862488 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp8t2\" (UniqueName: \"kubernetes.io/projected/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-kube-api-access-xp8t2\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.862531 4813 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.862544 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98ccn\" (UniqueName: \"kubernetes.io/projected/6a688c77-0e32-4f07-91f3-9a69d7b26e66-kube-api-access-98ccn\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.862557 4813 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.862570 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.862582 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.862593 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 
09:09:58.862604 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.862615 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.862627 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.950345 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-create-s5567"] Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.987571 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-create-s5567"] Feb 17 09:09:58 crc kubenswrapper[4813]: I0217 09:09:58.996251 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.003452 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-7579-account-create-update-t76tx"] Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.020402 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-config-data" (OuterVolumeSpecName: "config-data") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.032365 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder7579-account-delete-9wh5p"] Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.043398 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder7579-account-delete-9wh5p"] Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.047524 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-7579-account-create-update-t76tx"] Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.056101 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-config-data" (OuterVolumeSpecName: "config-data") pod "6a688c77-0e32-4f07-91f3-9a69d7b26e66" (UID: "6a688c77-0e32-4f07-91f3-9a69d7b26e66"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.065909 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.065945 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.065954 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.076420 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "6a688c77-0e32-4f07-91f3-9a69d7b26e66" (UID: "6a688c77-0e32-4f07-91f3-9a69d7b26e66"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.097800 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" (UID: "ffa73c34-bdd8-4f6d-a237-3552d6e2ae53"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.131650 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135ce389-c973-4614-935e-7d88b1f4666c" path="/var/lib/kubelet/pods/135ce389-c973-4614-935e-7d88b1f4666c/volumes" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.132169 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a" path="/var/lib/kubelet/pods/2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a/volumes" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.132690 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678e07d8-2ac8-4504-8834-685bc3a4ecfd" path="/var/lib/kubelet/pods/678e07d8-2ac8-4504-8834-685bc3a4ecfd/volumes" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.133763 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3253024-0605-4e0e-a476-d7947b0880ba" path="/var/lib/kubelet/pods/f3253024-0605-4e0e-a476-d7947b0880ba/volumes" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.167387 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/6a688c77-0e32-4f07-91f3-9a69d7b26e66-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.167423 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.262942 4813 generic.go:334] "Generic (PLEG): container finished" podID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerID="738d1f8f4effb08da491817e26b0449ea359e015429272539a71203c8c98a239" exitCode=0 Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.263019 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ed2dfee8-4b50-469d-9ee6-036af68b084e","Type":"ContainerDied","Data":"738d1f8f4effb08da491817e26b0449ea359e015429272539a71203c8c98a239"} Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.266152 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"ffa73c34-bdd8-4f6d-a237-3552d6e2ae53","Type":"ContainerDied","Data":"6ba0ec04d37f27ef837a3bf0bd7ca5ba5387cdd53691a3b2f5ab28a1e5fbeb80"} Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.266449 4813 scope.go:117] "RemoveContainer" containerID="f29d915a24400342adc62687846b6ce0ea878a68b6b1a6ba3598c97a55cad776" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.266930 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.274816 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"6a688c77-0e32-4f07-91f3-9a69d7b26e66","Type":"ContainerDied","Data":"c35168afb00c4646f41b37faa3f37ae1fededa83bebceb68dd206001f73bf3f3"} Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.274915 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.292574 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.310218 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.325332 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.329539 4813 scope.go:117] "RemoveContainer" containerID="9e39e48ba9dcf0525fe5da982d484e149c97d8836743430591806fec13184f87" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.331798 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.376615 4813 scope.go:117] "RemoveContainer" containerID="2cba32984b362663d70076f4cebab9828a2858207bc2c1471b152360a4ca406d" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.421262 4813 scope.go:117] "RemoveContainer" containerID="fb41d8641cc2e931c946d0cfd6bb86f825103c005bf5bbc33b39a8dd7f8dea22" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.625263 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.652562 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.778424 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-combined-ca-bundle\") pod \"ed2dfee8-4b50-469d-9ee6-036af68b084e\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.778787 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-ceilometer-tls-certs\") pod \"ed2dfee8-4b50-469d-9ee6-036af68b084e\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.779005 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29z9c\" (UniqueName: \"kubernetes.io/projected/ed2dfee8-4b50-469d-9ee6-036af68b084e-kube-api-access-29z9c\") pod \"ed2dfee8-4b50-469d-9ee6-036af68b084e\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.779182 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed2dfee8-4b50-469d-9ee6-036af68b084e-log-httpd\") pod \"ed2dfee8-4b50-469d-9ee6-036af68b084e\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.779384 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed2dfee8-4b50-469d-9ee6-036af68b084e-run-httpd\") 
pod \"ed2dfee8-4b50-469d-9ee6-036af68b084e\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.779532 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-config-data\") pod \"ed2dfee8-4b50-469d-9ee6-036af68b084e\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.779666 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-sg-core-conf-yaml\") pod \"ed2dfee8-4b50-469d-9ee6-036af68b084e\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.779774 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-scripts\") pod \"ed2dfee8-4b50-469d-9ee6-036af68b084e\" (UID: \"ed2dfee8-4b50-469d-9ee6-036af68b084e\") " Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.779970 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed2dfee8-4b50-469d-9ee6-036af68b084e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ed2dfee8-4b50-469d-9ee6-036af68b084e" (UID: "ed2dfee8-4b50-469d-9ee6-036af68b084e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.780264 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed2dfee8-4b50-469d-9ee6-036af68b084e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ed2dfee8-4b50-469d-9ee6-036af68b084e" (UID: "ed2dfee8-4b50-469d-9ee6-036af68b084e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.780509 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed2dfee8-4b50-469d-9ee6-036af68b084e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.780603 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed2dfee8-4b50-469d-9ee6-036af68b084e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.793952 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed2dfee8-4b50-469d-9ee6-036af68b084e-kube-api-access-29z9c" (OuterVolumeSpecName: "kube-api-access-29z9c") pod "ed2dfee8-4b50-469d-9ee6-036af68b084e" (UID: "ed2dfee8-4b50-469d-9ee6-036af68b084e"). InnerVolumeSpecName "kube-api-access-29z9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.794884 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-scripts" (OuterVolumeSpecName: "scripts") pod "ed2dfee8-4b50-469d-9ee6-036af68b084e" (UID: "ed2dfee8-4b50-469d-9ee6-036af68b084e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.804492 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ed2dfee8-4b50-469d-9ee6-036af68b084e" (UID: "ed2dfee8-4b50-469d-9ee6-036af68b084e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.840868 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ed2dfee8-4b50-469d-9ee6-036af68b084e" (UID: "ed2dfee8-4b50-469d-9ee6-036af68b084e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.861681 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-config-data" (OuterVolumeSpecName: "config-data") pod "ed2dfee8-4b50-469d-9ee6-036af68b084e" (UID: "ed2dfee8-4b50-469d-9ee6-036af68b084e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.870544 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed2dfee8-4b50-469d-9ee6-036af68b084e" (UID: "ed2dfee8-4b50-469d-9ee6-036af68b084e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.881792 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.881823 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.881836 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.881848 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.881860 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed2dfee8-4b50-469d-9ee6-036af68b084e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:09:59 crc kubenswrapper[4813]: I0217 09:09:59.881871 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29z9c\" (UniqueName: \"kubernetes.io/projected/ed2dfee8-4b50-469d-9ee6-036af68b084e-kube-api-access-29z9c\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.291732 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ed2dfee8-4b50-469d-9ee6-036af68b084e","Type":"ContainerDied","Data":"1cc4bca3d382b5e45f3ff845c5da9148ee580c6fbd206075c8a272bd77cf19e8"} Feb 17 09:10:00 crc 
kubenswrapper[4813]: I0217 09:10:00.291783 4813 scope.go:117] "RemoveContainer" containerID="006775ce68d1da5c94d4aefcda941200d9f9f11e8cd40fca287332e8a6131350" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.291886 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.319768 4813 scope.go:117] "RemoveContainer" containerID="7fa9305a62afa4dc4cf81bb60321e338fa262debbe0f6973d16f4309dbbad5fe" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.335262 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.337757 4813 scope.go:117] "RemoveContainer" containerID="738d1f8f4effb08da491817e26b0449ea359e015429272539a71203c8c98a239" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.343446 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356177 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:00 crc kubenswrapper[4813]: E0217 09:10:00.356567 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678e07d8-2ac8-4504-8834-685bc3a4ecfd" containerName="cinder-api" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356587 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="678e07d8-2ac8-4504-8834-685bc3a4ecfd" containerName="cinder-api" Feb 17 09:10:00 crc kubenswrapper[4813]: E0217 09:10:00.356598 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a" containerName="mariadb-account-delete" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356605 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a" containerName="mariadb-account-delete" Feb 17 09:10:00 crc 
kubenswrapper[4813]: E0217 09:10:00.356619 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="ceilometer-notification-agent" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356626 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="ceilometer-notification-agent" Feb 17 09:10:00 crc kubenswrapper[4813]: E0217 09:10:00.356636 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="proxy-httpd" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356642 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="proxy-httpd" Feb 17 09:10:00 crc kubenswrapper[4813]: E0217 09:10:00.356652 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a688c77-0e32-4f07-91f3-9a69d7b26e66" containerName="cinder-scheduler" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356659 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a688c77-0e32-4f07-91f3-9a69d7b26e66" containerName="cinder-scheduler" Feb 17 09:10:00 crc kubenswrapper[4813]: E0217 09:10:00.356672 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" containerName="probe" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356679 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" containerName="probe" Feb 17 09:10:00 crc kubenswrapper[4813]: E0217 09:10:00.356688 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="sg-core" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356697 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="sg-core" Feb 17 09:10:00 crc kubenswrapper[4813]: E0217 
09:10:00.356710 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="ceilometer-central-agent" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356717 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="ceilometer-central-agent" Feb 17 09:10:00 crc kubenswrapper[4813]: E0217 09:10:00.356729 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a688c77-0e32-4f07-91f3-9a69d7b26e66" containerName="probe" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356736 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a688c77-0e32-4f07-91f3-9a69d7b26e66" containerName="probe" Feb 17 09:10:00 crc kubenswrapper[4813]: E0217 09:10:00.356756 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" containerName="cinder-backup" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356764 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" containerName="cinder-backup" Feb 17 09:10:00 crc kubenswrapper[4813]: E0217 09:10:00.356779 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678e07d8-2ac8-4504-8834-685bc3a4ecfd" containerName="cinder-api-log" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356785 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="678e07d8-2ac8-4504-8834-685bc3a4ecfd" containerName="cinder-api-log" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356954 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" containerName="cinder-backup" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356966 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" containerName="probe" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356977 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6a688c77-0e32-4f07-91f3-9a69d7b26e66" containerName="probe" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356986 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="sg-core" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.356994 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="678e07d8-2ac8-4504-8834-685bc3a4ecfd" containerName="cinder-api" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.357006 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="678e07d8-2ac8-4504-8834-685bc3a4ecfd" containerName="cinder-api-log" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.357017 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c8b0b7b-cb68-4548-9dd1-bacdf5101e9a" containerName="mariadb-account-delete" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.357026 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a688c77-0e32-4f07-91f3-9a69d7b26e66" containerName="cinder-scheduler" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.357038 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="ceilometer-notification-agent" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.357051 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="ceilometer-central-agent" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.357064 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" containerName="proxy-httpd" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.361590 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.362123 4813 scope.go:117] "RemoveContainer" containerID="4252bb72f6507c5e549dd44be11920c60765dddf21c39f9806dee53569b2748b" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.364708 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.366951 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.370992 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.382512 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.409889 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.409943 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-run-httpd\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.409964 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t67pv\" (UniqueName: 
\"kubernetes.io/projected/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-kube-api-access-t67pv\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.409983 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-log-httpd\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.410002 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-config-data\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.410056 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-scripts\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.410078 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.410114 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.511382 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-scripts\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.512391 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.512467 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.512556 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.512603 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-run-httpd\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.512627 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t67pv\" (UniqueName: \"kubernetes.io/projected/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-kube-api-access-t67pv\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.512657 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-log-httpd\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.512685 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-config-data\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.513064 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-run-httpd\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.513625 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-log-httpd\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.518279 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-scripts\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.518569 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.518943 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.519457 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-config-data\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.522891 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.537057 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t67pv\" (UniqueName: \"kubernetes.io/projected/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-kube-api-access-t67pv\") pod \"ceilometer-0\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") " 
pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.740817 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:00 crc kubenswrapper[4813]: I0217 09:10:00.834725 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log" Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.119625 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a688c77-0e32-4f07-91f3-9a69d7b26e66" path="/var/lib/kubelet/pods/6a688c77-0e32-4f07-91f3-9a69d7b26e66/volumes" Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.120622 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed2dfee8-4b50-469d-9ee6-036af68b084e" path="/var/lib/kubelet/pods/ed2dfee8-4b50-469d-9ee6-036af68b084e/volumes" Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.121346 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffa73c34-bdd8-4f6d-a237-3552d6e2ae53" path="/var/lib/kubelet/pods/ffa73c34-bdd8-4f6d-a237-3552d6e2ae53/volumes" Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.222512 4813 scope.go:117] "RemoveContainer" containerID="0ed3187798b9b41634e1666debefc88606d08d759b9e38df18732b2cf37adc53" Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.246724 4813 scope.go:117] "RemoveContainer" containerID="749f9ab04aad404668df7ac3c8235566fef6761e0e79f144881aabf316deb623" Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.257351 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:01 crc kubenswrapper[4813]: W0217 09:10:01.258040 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a4ecabc_5224_45c7_89d7_fe4fc3c2945e.slice/crio-0e5d53de5041fc43f15672fe6c4101e99c5a271daed2db580fa52c3d0a141d8a WatchSource:0}: Error finding container 0e5d53de5041fc43f15672fe6c4101e99c5a271daed2db580fa52c3d0a141d8a: Status 404 returned error can't find the container with id 0e5d53de5041fc43f15672fe6c4101e99c5a271daed2db580fa52c3d0a141d8a Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.303860 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e","Type":"ContainerStarted","Data":"0e5d53de5041fc43f15672fe6c4101e99c5a271daed2db580fa52c3d0a141d8a"} Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.304640 4813 scope.go:117] "RemoveContainer" containerID="9d9b14557521d459bb2291fee6080d1db6376cd255dbe0f2f3dbe9e28d1703c3" Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.323888 4813 scope.go:117] "RemoveContainer" containerID="7ef8b35dfbef639f7e9aea084a582cb7398265ae67d027dff7bd399a90e8ae1d" Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.341241 4813 scope.go:117] "RemoveContainer" containerID="ff1a41b9d826bf0a13960d2a7308985735df3748702de020ae9e8a1ff3d8fe2e" Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.370897 4813 scope.go:117] "RemoveContainer" containerID="c488a8227b16bf2d099927a19350c4626650b68d38197779b703abe7af59394f" Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.395595 4813 scope.go:117] "RemoveContainer" containerID="8f5a36869b7208ed9002b79d5287230810331e49b113dbd534aae90d865e2529" Feb 17 09:10:01 crc kubenswrapper[4813]: I0217 09:10:01.412747 4813 scope.go:117] "RemoveContainer" containerID="d1655e624d8f359518fde38db62600d3511de599fe709779972ecb70c41196a3" Feb 17 09:10:02 crc kubenswrapper[4813]: I0217 09:10:02.040413 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log" Feb 17 09:10:02 crc kubenswrapper[4813]: I0217 09:10:02.316829 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e","Type":"ContainerStarted","Data":"377866df9bb04533e7904ce04f1439c72508a6bcf520a03ffebe153f4b3e6ae7"} Feb 17 09:10:03 crc kubenswrapper[4813]: I0217 09:10:03.252769 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log" Feb 17 09:10:03 crc kubenswrapper[4813]: I0217 09:10:03.330416 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e","Type":"ContainerStarted","Data":"57612cd2d863b67b8086e1cd5b47eb53895078428aaaa12bd2e32e5d5f591fcd"} Feb 17 09:10:03 crc kubenswrapper[4813]: I0217 09:10:03.330461 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e","Type":"ContainerStarted","Data":"d1fcb61369d9be6fa7ee57fe63dceb3862237c2d5f43c872740dd8fd87fbe756"} Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.453583 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_e289f880-3998-42a1-8ed2-1d0e1f356d36/watcher-decision-engine/0.log" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.818302 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.879410 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-custom-prometheus-ca\") pod \"e289f880-3998-42a1-8ed2-1d0e1f356d36\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.879477 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5cxw\" (UniqueName: \"kubernetes.io/projected/e289f880-3998-42a1-8ed2-1d0e1f356d36-kube-api-access-z5cxw\") pod \"e289f880-3998-42a1-8ed2-1d0e1f356d36\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.879512 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e289f880-3998-42a1-8ed2-1d0e1f356d36-logs\") pod \"e289f880-3998-42a1-8ed2-1d0e1f356d36\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.879534 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-cert-memcached-mtls\") pod \"e289f880-3998-42a1-8ed2-1d0e1f356d36\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.879570 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-config-data\") pod \"e289f880-3998-42a1-8ed2-1d0e1f356d36\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.879662 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-combined-ca-bundle\") pod \"e289f880-3998-42a1-8ed2-1d0e1f356d36\" (UID: \"e289f880-3998-42a1-8ed2-1d0e1f356d36\") " Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.881438 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e289f880-3998-42a1-8ed2-1d0e1f356d36-logs" (OuterVolumeSpecName: "logs") pod "e289f880-3998-42a1-8ed2-1d0e1f356d36" (UID: "e289f880-3998-42a1-8ed2-1d0e1f356d36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.888238 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e289f880-3998-42a1-8ed2-1d0e1f356d36-kube-api-access-z5cxw" (OuterVolumeSpecName: "kube-api-access-z5cxw") pod "e289f880-3998-42a1-8ed2-1d0e1f356d36" (UID: "e289f880-3998-42a1-8ed2-1d0e1f356d36"). InnerVolumeSpecName "kube-api-access-z5cxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.908795 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e289f880-3998-42a1-8ed2-1d0e1f356d36" (UID: "e289f880-3998-42a1-8ed2-1d0e1f356d36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.927056 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e289f880-3998-42a1-8ed2-1d0e1f356d36" (UID: "e289f880-3998-42a1-8ed2-1d0e1f356d36"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.951286 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "e289f880-3998-42a1-8ed2-1d0e1f356d36" (UID: "e289f880-3998-42a1-8ed2-1d0e1f356d36"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.954622 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-config-data" (OuterVolumeSpecName: "config-data") pod "e289f880-3998-42a1-8ed2-1d0e1f356d36" (UID: "e289f880-3998-42a1-8ed2-1d0e1f356d36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.986004 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.986067 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.986106 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5cxw\" (UniqueName: \"kubernetes.io/projected/e289f880-3998-42a1-8ed2-1d0e1f356d36-kube-api-access-z5cxw\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.986123 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e289f880-3998-42a1-8ed2-1d0e1f356d36-logs\") on node \"crc\" 
DevicePath \"\"" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.986136 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:04 crc kubenswrapper[4813]: I0217 09:10:04.986148 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e289f880-3998-42a1-8ed2-1d0e1f356d36-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.353990 4813 generic.go:334] "Generic (PLEG): container finished" podID="e289f880-3998-42a1-8ed2-1d0e1f356d36" containerID="fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73" exitCode=0 Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.354061 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e289f880-3998-42a1-8ed2-1d0e1f356d36","Type":"ContainerDied","Data":"fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73"} Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.354094 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"e289f880-3998-42a1-8ed2-1d0e1f356d36","Type":"ContainerDied","Data":"8e10edf8241d1be74dc5249d7cca5f9061bf37446385f7e007c003848b87c126"} Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.354114 4813 scope.go:117] "RemoveContainer" containerID="fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.354252 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.359895 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e","Type":"ContainerStarted","Data":"c10fb5f6bf7bd19ea74c53523706321bcc7bbf701350e7b2379d8815fffd7aea"} Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.360820 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.387914 4813 scope.go:117] "RemoveContainer" containerID="fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73" Feb 17 09:10:05 crc kubenswrapper[4813]: E0217 09:10:05.388712 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73\": container with ID starting with fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73 not found: ID does not exist" containerID="fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.388743 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73"} err="failed to get container status \"fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73\": rpc error: code = NotFound desc = could not find container \"fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73\": container with ID starting with fadbb3eecc9ef00ab4e6bfa08fd1bda9314e101dfca6977a026936c9fc0b1f73 not found: ID does not exist" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.400365 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 
09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.412387 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.424366 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:10:05 crc kubenswrapper[4813]: E0217 09:10:05.424775 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e289f880-3998-42a1-8ed2-1d0e1f356d36" containerName="watcher-decision-engine" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.424810 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e289f880-3998-42a1-8ed2-1d0e1f356d36" containerName="watcher-decision-engine" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.425034 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e289f880-3998-42a1-8ed2-1d0e1f356d36" containerName="watcher-decision-engine" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.425786 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.424366 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.322614777 podStartE2EDuration="5.424342541s" podCreationTimestamp="2026-02-17 09:10:00 +0000 UTC" firstStartedPulling="2026-02-17 09:10:01.261189173 +0000 UTC m=+1748.921950396" lastFinishedPulling="2026-02-17 09:10:04.362916927 +0000 UTC m=+1752.023678160" observedRunningTime="2026-02-17 09:10:05.410821187 +0000 UTC m=+1753.071582410" watchObservedRunningTime="2026-02-17 09:10:05.424342541 +0000 UTC m=+1753.085103774" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.432990 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.447979 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.492830 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.492885 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.493064 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.493183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.493235 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.493351 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xcz2\" (UniqueName: \"kubernetes.io/projected/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-kube-api-access-5xcz2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.594999 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.595081 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.595116 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.595158 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xcz2\" (UniqueName: \"kubernetes.io/projected/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-kube-api-access-5xcz2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.595191 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.595234 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-combined-ca-bundle\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.600265 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.600550 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.600907 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.601223 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.621686 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.631222 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xcz2\" (UniqueName: \"kubernetes.io/projected/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-kube-api-access-5xcz2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:05 crc kubenswrapper[4813]: I0217 09:10:05.770794 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:06 crc kubenswrapper[4813]: I0217 09:10:06.238264 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:10:06 crc kubenswrapper[4813]: W0217 09:10:06.250657 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b154f98_5116_4b3d_a110_bbc6f07b6f6c.slice/crio-2fb7ec0451677564555fbb7d9e90f2cf930d981586b0d0ebbfe510092f0df619 WatchSource:0}: Error finding container 2fb7ec0451677564555fbb7d9e90f2cf930d981586b0d0ebbfe510092f0df619: Status 404 returned error can't find the container with id 2fb7ec0451677564555fbb7d9e90f2cf930d981586b0d0ebbfe510092f0df619 Feb 17 09:10:06 crc kubenswrapper[4813]: I0217 09:10:06.387620 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"4b154f98-5116-4b3d-a110-bbc6f07b6f6c","Type":"ContainerStarted","Data":"2fb7ec0451677564555fbb7d9e90f2cf930d981586b0d0ebbfe510092f0df619"} Feb 17 09:10:07 crc kubenswrapper[4813]: I0217 09:10:07.123346 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e289f880-3998-42a1-8ed2-1d0e1f356d36" 
path="/var/lib/kubelet/pods/e289f880-3998-42a1-8ed2-1d0e1f356d36/volumes" Feb 17 09:10:07 crc kubenswrapper[4813]: I0217 09:10:07.400038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"4b154f98-5116-4b3d-a110-bbc6f07b6f6c","Type":"ContainerStarted","Data":"4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4"} Feb 17 09:10:07 crc kubenswrapper[4813]: I0217 09:10:07.424460 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.424438728 podStartE2EDuration="2.424438728s" podCreationTimestamp="2026-02-17 09:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:10:07.417934714 +0000 UTC m=+1755.078695937" watchObservedRunningTime="2026-02-17 09:10:07.424438728 +0000 UTC m=+1755.085199971" Feb 17 09:10:07 crc kubenswrapper[4813]: I0217 09:10:07.978995 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_4b154f98-5116-4b3d-a110-bbc6f07b6f6c/watcher-decision-engine/0.log" Feb 17 09:10:09 crc kubenswrapper[4813]: I0217 09:10:09.157463 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_4b154f98-5116-4b3d-a110-bbc6f07b6f6c/watcher-decision-engine/0.log" Feb 17 09:10:10 crc kubenswrapper[4813]: I0217 09:10:10.432368 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_4b154f98-5116-4b3d-a110-bbc6f07b6f6c/watcher-decision-engine/0.log" Feb 17 09:10:11 crc kubenswrapper[4813]: I0217 09:10:11.696109 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_4b154f98-5116-4b3d-a110-bbc6f07b6f6c/watcher-decision-engine/0.log" Feb 17 
09:10:12 crc kubenswrapper[4813]: I0217 09:10:12.111105 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:10:12 crc kubenswrapper[4813]: E0217 09:10:12.111344 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:10:12 crc kubenswrapper[4813]: I0217 09:10:12.952804 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_4b154f98-5116-4b3d-a110-bbc6f07b6f6c/watcher-decision-engine/0.log" Feb 17 09:10:14 crc kubenswrapper[4813]: I0217 09:10:14.186846 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_4b154f98-5116-4b3d-a110-bbc6f07b6f6c/watcher-decision-engine/0.log" Feb 17 09:10:15 crc kubenswrapper[4813]: I0217 09:10:15.418992 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_4b154f98-5116-4b3d-a110-bbc6f07b6f6c/watcher-decision-engine/0.log" Feb 17 09:10:15 crc kubenswrapper[4813]: I0217 09:10:15.771235 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:15 crc kubenswrapper[4813]: I0217 09:10:15.816947 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:16 crc kubenswrapper[4813]: I0217 09:10:16.485734 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
Feb 17 09:10:16 crc kubenswrapper[4813]: I0217 09:10:16.529278 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:16 crc kubenswrapper[4813]: I0217 09:10:16.643943 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_4b154f98-5116-4b3d-a110-bbc6f07b6f6c/watcher-decision-engine/0.log" Feb 17 09:10:17 crc kubenswrapper[4813]: I0217 09:10:17.871100 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_4b154f98-5116-4b3d-a110-bbc6f07b6f6c/watcher-decision-engine/0.log" Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.075988 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d5skl"] Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.083905 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d5skl"] Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.150822 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.159987 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher2b33-account-delete-kg2m2"] Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.161247 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.195128 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.195405 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="a8c3c80f-dcca-441c-9d3b-a9b20a078b2f" containerName="watcher-applier" containerID="cri-o://ad51458a2103ce0886b2731c64e1675e3e4fb80670ea7978ab3494b474e3aedf" gracePeriod=30 Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.211101 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher2b33-account-delete-kg2m2"] Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.225349 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.225632 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c2c3932a-3be9-4161-b873-a8a905d7c639" containerName="watcher-kuttl-api-log" containerID="cri-o://75bc40bdc7b1f376cbecb207bcf65260e9d02d628307ff22d3d370b07a1a8f23" gracePeriod=30 Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.225774 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c2c3932a-3be9-4161-b873-a8a905d7c639" containerName="watcher-api" containerID="cri-o://a9a7ba3924ec3adf494ceee70cdbc828731839a691b733f2f60fad9aeae69812" gracePeriod=30 Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.322065 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7485a09e-b88b-48b6-ba27-0ee92958322c-operator-scripts\") pod 
\"watcher2b33-account-delete-kg2m2\" (UID: \"7485a09e-b88b-48b6-ba27-0ee92958322c\") " pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.322392 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqj88\" (UniqueName: \"kubernetes.io/projected/7485a09e-b88b-48b6-ba27-0ee92958322c-kube-api-access-wqj88\") pod \"watcher2b33-account-delete-kg2m2\" (UID: \"7485a09e-b88b-48b6-ba27-0ee92958322c\") " pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.423864 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqj88\" (UniqueName: \"kubernetes.io/projected/7485a09e-b88b-48b6-ba27-0ee92958322c-kube-api-access-wqj88\") pod \"watcher2b33-account-delete-kg2m2\" (UID: \"7485a09e-b88b-48b6-ba27-0ee92958322c\") " pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.424139 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7485a09e-b88b-48b6-ba27-0ee92958322c-operator-scripts\") pod \"watcher2b33-account-delete-kg2m2\" (UID: \"7485a09e-b88b-48b6-ba27-0ee92958322c\") " pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.425321 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7485a09e-b88b-48b6-ba27-0ee92958322c-operator-scripts\") pod \"watcher2b33-account-delete-kg2m2\" (UID: \"7485a09e-b88b-48b6-ba27-0ee92958322c\") " pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.457149 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqj88\" 
(UniqueName: \"kubernetes.io/projected/7485a09e-b88b-48b6-ba27-0ee92958322c-kube-api-access-wqj88\") pod \"watcher2b33-account-delete-kg2m2\" (UID: \"7485a09e-b88b-48b6-ba27-0ee92958322c\") " pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.486983 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.505594 4813 generic.go:334] "Generic (PLEG): container finished" podID="c2c3932a-3be9-4161-b873-a8a905d7c639" containerID="75bc40bdc7b1f376cbecb207bcf65260e9d02d628307ff22d3d370b07a1a8f23" exitCode=143 Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.506009 4813 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-6xzxm\" not found" Feb 17 09:10:18 crc kubenswrapper[4813]: I0217 09:10:18.506193 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c2c3932a-3be9-4161-b873-a8a905d7c639","Type":"ContainerDied","Data":"75bc40bdc7b1f376cbecb207bcf65260e9d02d628307ff22d3d370b07a1a8f23"} Feb 17 09:10:18 crc kubenswrapper[4813]: E0217 09:10:18.630185 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Feb 17 09:10:18 crc kubenswrapper[4813]: E0217 09:10:18.630241 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data podName:4b154f98-5116-4b3d-a110-bbc6f07b6f6c nodeName:}" failed. No retries permitted until 2026-02-17 09:10:19.130225506 +0000 UTC m=+1766.790986729 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "4b154f98-5116-4b3d-a110-bbc6f07b6f6c") : secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:10:19 crc kubenswrapper[4813]: I0217 09:10:19.026199 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher2b33-account-delete-kg2m2"]
Feb 17 09:10:19 crc kubenswrapper[4813]: I0217 09:10:19.121812 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ac057e-5e86-48c3-b6f6-077d203c9659" path="/var/lib/kubelet/pods/d5ac057e-5e86-48c3-b6f6-077d203c9659/volumes"
Feb 17 09:10:19 crc kubenswrapper[4813]: E0217 09:10:19.148754 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:10:19 crc kubenswrapper[4813]: E0217 09:10:19.148832 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data podName:4b154f98-5116-4b3d-a110-bbc6f07b6f6c nodeName:}" failed. No retries permitted until 2026-02-17 09:10:20.148811724 +0000 UTC m=+1767.809572947 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "4b154f98-5116-4b3d-a110-bbc6f07b6f6c") : secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:10:19 crc kubenswrapper[4813]: I0217 09:10:19.522196 4813 generic.go:334] "Generic (PLEG): container finished" podID="c2c3932a-3be9-4161-b873-a8a905d7c639" containerID="a9a7ba3924ec3adf494ceee70cdbc828731839a691b733f2f60fad9aeae69812" exitCode=0
Feb 17 09:10:19 crc kubenswrapper[4813]: I0217 09:10:19.522544 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c2c3932a-3be9-4161-b873-a8a905d7c639","Type":"ContainerDied","Data":"a9a7ba3924ec3adf494ceee70cdbc828731839a691b733f2f60fad9aeae69812"}
Feb 17 09:10:19 crc kubenswrapper[4813]: I0217 09:10:19.522571 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c2c3932a-3be9-4161-b873-a8a905d7c639","Type":"ContainerDied","Data":"11d893c77160f6aef14c7b90a1c9f71a5c2b3c72bbebc065b8c4cdcb003abc5e"}
Feb 17 09:10:19 crc kubenswrapper[4813]: I0217 09:10:19.522582 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11d893c77160f6aef14c7b90a1c9f71a5c2b3c72bbebc065b8c4cdcb003abc5e"
Feb 17 09:10:19 crc kubenswrapper[4813]: I0217 09:10:19.534676 4813 generic.go:334] "Generic (PLEG): container finished" podID="a8c3c80f-dcca-441c-9d3b-a9b20a078b2f" containerID="ad51458a2103ce0886b2731c64e1675e3e4fb80670ea7978ab3494b474e3aedf" exitCode=0
Feb 17 09:10:19 crc kubenswrapper[4813]: I0217 09:10:19.534765 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f","Type":"ContainerDied","Data":"ad51458a2103ce0886b2731c64e1675e3e4fb80670ea7978ab3494b474e3aedf"}
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.551978 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="4b154f98-5116-4b3d-a110-bbc6f07b6f6c" containerName="watcher-decision-engine" containerID="cri-o://4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4" gracePeriod=30
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.553513 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" event={"ID":"7485a09e-b88b-48b6-ba27-0ee92958322c","Type":"ContainerStarted","Data":"d0f53a3b7998a32fa9721e65baa08a7aa6e2198a10d40c14e8ffd5cc485a20a8"}
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.553573 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" event={"ID":"7485a09e-b88b-48b6-ba27-0ee92958322c","Type":"ContainerStarted","Data":"51714c3faed6918aeb33afc24219748054534253e4b9bca68b727a31d2f45a1c"}
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.578936 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" podStartSLOduration=1.578917295 podStartE2EDuration="1.578917295s" podCreationTimestamp="2026-02-17 09:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:10:19.569378783 +0000 UTC m=+1767.230140026" watchObservedRunningTime="2026-02-17 09:10:19.578917295 +0000 UTC m=+1767.239678508"
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.653997 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.761795 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2c3932a-3be9-4161-b873-a8a905d7c639-logs\") pod \"c2c3932a-3be9-4161-b873-a8a905d7c639\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") "
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.761866 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-config-data\") pod \"c2c3932a-3be9-4161-b873-a8a905d7c639\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") "
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.761920 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-cert-memcached-mtls\") pod \"c2c3932a-3be9-4161-b873-a8a905d7c639\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") "
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.761941 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-combined-ca-bundle\") pod \"c2c3932a-3be9-4161-b873-a8a905d7c639\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") "
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.762018 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-custom-prometheus-ca\") pod \"c2c3932a-3be9-4161-b873-a8a905d7c639\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") "
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.762076 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4cm9\" (UniqueName: \"kubernetes.io/projected/c2c3932a-3be9-4161-b873-a8a905d7c639-kube-api-access-m4cm9\") pod \"c2c3932a-3be9-4161-b873-a8a905d7c639\" (UID: \"c2c3932a-3be9-4161-b873-a8a905d7c639\") "
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.768458 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2c3932a-3be9-4161-b873-a8a905d7c639-logs" (OuterVolumeSpecName: "logs") pod "c2c3932a-3be9-4161-b873-a8a905d7c639" (UID: "c2c3932a-3be9-4161-b873-a8a905d7c639"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.769437 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c3932a-3be9-4161-b873-a8a905d7c639-kube-api-access-m4cm9" (OuterVolumeSpecName: "kube-api-access-m4cm9") pod "c2c3932a-3be9-4161-b873-a8a905d7c639" (UID: "c2c3932a-3be9-4161-b873-a8a905d7c639"). InnerVolumeSpecName "kube-api-access-m4cm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.851540 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2c3932a-3be9-4161-b873-a8a905d7c639" (UID: "c2c3932a-3be9-4161-b873-a8a905d7c639"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.853377 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c2c3932a-3be9-4161-b873-a8a905d7c639" (UID: "c2c3932a-3be9-4161-b873-a8a905d7c639"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.872990 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.873599 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.873615 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4cm9\" (UniqueName: \"kubernetes.io/projected/c2c3932a-3be9-4161-b873-a8a905d7c639-kube-api-access-m4cm9\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.873624 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2c3932a-3be9-4161-b873-a8a905d7c639-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.873634 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.888439 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-config-data" (OuterVolumeSpecName: "config-data") pod "c2c3932a-3be9-4161-b873-a8a905d7c639" (UID: "c2c3932a-3be9-4161-b873-a8a905d7c639"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.904435 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "c2c3932a-3be9-4161-b873-a8a905d7c639" (UID: "c2c3932a-3be9-4161-b873-a8a905d7c639"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.974520 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjbsm\" (UniqueName: \"kubernetes.io/projected/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-kube-api-access-gjbsm\") pod \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") "
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.974611 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-config-data\") pod \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") "
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.974661 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-cert-memcached-mtls\") pod \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") "
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.974702 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-combined-ca-bundle\") pod \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") "
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.974760 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-logs\") pod \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\" (UID: \"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f\") "
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.975135 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.975146 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c2c3932a-3be9-4161-b873-a8a905d7c639-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.975165 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-logs" (OuterVolumeSpecName: "logs") pod "a8c3c80f-dcca-441c-9d3b-a9b20a078b2f" (UID: "a8c3c80f-dcca-441c-9d3b-a9b20a078b2f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.978045 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-kube-api-access-gjbsm" (OuterVolumeSpecName: "kube-api-access-gjbsm") pod "a8c3c80f-dcca-441c-9d3b-a9b20a078b2f" (UID: "a8c3c80f-dcca-441c-9d3b-a9b20a078b2f"). InnerVolumeSpecName "kube-api-access-gjbsm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:19.993923 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8c3c80f-dcca-441c-9d3b-a9b20a078b2f" (UID: "a8c3c80f-dcca-441c-9d3b-a9b20a078b2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.010614 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-config-data" (OuterVolumeSpecName: "config-data") pod "a8c3c80f-dcca-441c-9d3b-a9b20a078b2f" (UID: "a8c3c80f-dcca-441c-9d3b-a9b20a078b2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.042462 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "a8c3c80f-dcca-441c-9d3b-a9b20a078b2f" (UID: "a8c3c80f-dcca-441c-9d3b-a9b20a078b2f"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.076981 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjbsm\" (UniqueName: \"kubernetes.io/projected/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-kube-api-access-gjbsm\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.077014 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.077029 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.077040 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.077053 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:20 crc kubenswrapper[4813]: E0217 09:10:20.180119 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:10:20 crc kubenswrapper[4813]: E0217 09:10:20.180194 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data podName:4b154f98-5116-4b3d-a110-bbc6f07b6f6c nodeName:}" failed. No retries permitted until 2026-02-17 09:10:22.180171013 +0000 UTC m=+1769.840932246 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "4b154f98-5116-4b3d-a110-bbc6f07b6f6c") : secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.566302 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a8c3c80f-dcca-441c-9d3b-a9b20a078b2f","Type":"ContainerDied","Data":"06f834cedbda38210c8a09df50244bcd8934ff0e14d05f0c28cb4fbbf00b3a8b"}
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.566375 4813 scope.go:117] "RemoveContainer" containerID="ad51458a2103ce0886b2731c64e1675e3e4fb80670ea7978ab3494b474e3aedf"
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.566410 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.568591 4813 generic.go:334] "Generic (PLEG): container finished" podID="7485a09e-b88b-48b6-ba27-0ee92958322c" containerID="d0f53a3b7998a32fa9721e65baa08a7aa6e2198a10d40c14e8ffd5cc485a20a8" exitCode=0
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.568645 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" event={"ID":"7485a09e-b88b-48b6-ba27-0ee92958322c","Type":"ContainerDied","Data":"d0f53a3b7998a32fa9721e65baa08a7aa6e2198a10d40c14e8ffd5cc485a20a8"}
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.568699 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.577977 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.578427 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="ceilometer-central-agent" containerID="cri-o://377866df9bb04533e7904ce04f1439c72508a6bcf520a03ffebe153f4b3e6ae7" gracePeriod=30
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.579029 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="proxy-httpd" containerID="cri-o://c10fb5f6bf7bd19ea74c53523706321bcc7bbf701350e7b2379d8815fffd7aea" gracePeriod=30
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.579075 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="ceilometer-notification-agent" containerID="cri-o://d1fcb61369d9be6fa7ee57fe63dceb3862237c2d5f43c872740dd8fd87fbe756" gracePeriod=30
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.579048 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="sg-core" containerID="cri-o://57612cd2d863b67b8086e1cd5b47eb53895078428aaaa12bd2e32e5d5f591fcd" gracePeriod=30
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.610665 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.239:3000/\": EOF"
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.634158 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.640957 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.649606 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:10:20 crc kubenswrapper[4813]: I0217 09:10:20.655571 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Feb 17 09:10:21 crc kubenswrapper[4813]: I0217 09:10:21.123033 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c3c80f-dcca-441c-9d3b-a9b20a078b2f" path="/var/lib/kubelet/pods/a8c3c80f-dcca-441c-9d3b-a9b20a078b2f/volumes"
Feb 17 09:10:21 crc kubenswrapper[4813]: I0217 09:10:21.124584 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c3932a-3be9-4161-b873-a8a905d7c639" path="/var/lib/kubelet/pods/c2c3932a-3be9-4161-b873-a8a905d7c639/volumes"
Feb 17 09:10:21 crc kubenswrapper[4813]: I0217 09:10:21.579266 4813 generic.go:334] "Generic (PLEG): container finished" podID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerID="c10fb5f6bf7bd19ea74c53523706321bcc7bbf701350e7b2379d8815fffd7aea" exitCode=0
Feb 17 09:10:21 crc kubenswrapper[4813]: I0217 09:10:21.579299 4813 generic.go:334] "Generic (PLEG): container finished" podID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerID="57612cd2d863b67b8086e1cd5b47eb53895078428aaaa12bd2e32e5d5f591fcd" exitCode=2
Feb 17 09:10:21 crc kubenswrapper[4813]: I0217 09:10:21.579331 4813 generic.go:334] "Generic (PLEG): container finished" podID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerID="377866df9bb04533e7904ce04f1439c72508a6bcf520a03ffebe153f4b3e6ae7" exitCode=0
Feb 17 09:10:21 crc kubenswrapper[4813]: I0217 09:10:21.579336 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e","Type":"ContainerDied","Data":"c10fb5f6bf7bd19ea74c53523706321bcc7bbf701350e7b2379d8815fffd7aea"}
Feb 17 09:10:21 crc kubenswrapper[4813]: I0217 09:10:21.579366 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e","Type":"ContainerDied","Data":"57612cd2d863b67b8086e1cd5b47eb53895078428aaaa12bd2e32e5d5f591fcd"}
Feb 17 09:10:21 crc kubenswrapper[4813]: I0217 09:10:21.579377 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e","Type":"ContainerDied","Data":"377866df9bb04533e7904ce04f1439c72508a6bcf520a03ffebe153f4b3e6ae7"}
Feb 17 09:10:21 crc kubenswrapper[4813]: I0217 09:10:21.959613 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2"
Feb 17 09:10:22 crc kubenswrapper[4813]: I0217 09:10:22.012207 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqj88\" (UniqueName: \"kubernetes.io/projected/7485a09e-b88b-48b6-ba27-0ee92958322c-kube-api-access-wqj88\") pod \"7485a09e-b88b-48b6-ba27-0ee92958322c\" (UID: \"7485a09e-b88b-48b6-ba27-0ee92958322c\") "
Feb 17 09:10:22 crc kubenswrapper[4813]: I0217 09:10:22.012366 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7485a09e-b88b-48b6-ba27-0ee92958322c-operator-scripts\") pod \"7485a09e-b88b-48b6-ba27-0ee92958322c\" (UID: \"7485a09e-b88b-48b6-ba27-0ee92958322c\") "
Feb 17 09:10:22 crc kubenswrapper[4813]: I0217 09:10:22.012788 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7485a09e-b88b-48b6-ba27-0ee92958322c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7485a09e-b88b-48b6-ba27-0ee92958322c" (UID: "7485a09e-b88b-48b6-ba27-0ee92958322c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 09:10:22 crc kubenswrapper[4813]: I0217 09:10:22.026754 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7485a09e-b88b-48b6-ba27-0ee92958322c-kube-api-access-wqj88" (OuterVolumeSpecName: "kube-api-access-wqj88") pod "7485a09e-b88b-48b6-ba27-0ee92958322c" (UID: "7485a09e-b88b-48b6-ba27-0ee92958322c"). InnerVolumeSpecName "kube-api-access-wqj88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:10:22 crc kubenswrapper[4813]: I0217 09:10:22.114443 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqj88\" (UniqueName: \"kubernetes.io/projected/7485a09e-b88b-48b6-ba27-0ee92958322c-kube-api-access-wqj88\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:22 crc kubenswrapper[4813]: I0217 09:10:22.114499 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7485a09e-b88b-48b6-ba27-0ee92958322c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:22 crc kubenswrapper[4813]: E0217 09:10:22.216793 4813 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:10:22 crc kubenswrapper[4813]: E0217 09:10:22.216862 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data podName:4b154f98-5116-4b3d-a110-bbc6f07b6f6c nodeName:}" failed. No retries permitted until 2026-02-17 09:10:26.21684286 +0000 UTC m=+1773.877604183 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "4b154f98-5116-4b3d-a110-bbc6f07b6f6c") : secret "watcher-kuttl-decision-engine-config-data" not found
Feb 17 09:10:22 crc kubenswrapper[4813]: I0217 09:10:22.593523 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2" event={"ID":"7485a09e-b88b-48b6-ba27-0ee92958322c","Type":"ContainerDied","Data":"51714c3faed6918aeb33afc24219748054534253e4b9bca68b727a31d2f45a1c"}
Feb 17 09:10:22 crc kubenswrapper[4813]: I0217 09:10:22.593594 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51714c3faed6918aeb33afc24219748054534253e4b9bca68b727a31d2f45a1c"
Feb 17 09:10:22 crc kubenswrapper[4813]: I0217 09:10:22.593554 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher2b33-account-delete-kg2m2"
Feb 17 09:10:23 crc kubenswrapper[4813]: I0217 09:10:23.115955 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58"
Feb 17 09:10:23 crc kubenswrapper[4813]: E0217 09:10:23.116218 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:10:23 crc kubenswrapper[4813]: I0217 09:10:23.187136 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-n8n86"]
Feb 17 09:10:23 crc kubenswrapper[4813]: I0217 09:10:23.200718 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-n8n86"]
Feb 17 09:10:23 crc kubenswrapper[4813]: I0217 09:10:23.214659 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher2b33-account-delete-kg2m2"]
Feb 17 09:10:23 crc kubenswrapper[4813]: I0217 09:10:23.222579 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher2b33-account-delete-kg2m2"]
Feb 17 09:10:23 crc kubenswrapper[4813]: I0217 09:10:23.229584 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg"]
Feb 17 09:10:23 crc kubenswrapper[4813]: I0217 09:10:23.236857 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-2b33-account-create-update-h8lqg"]
Feb 17 09:10:23 crc kubenswrapper[4813]: I0217 09:10:23.606089 4813 generic.go:334] "Generic (PLEG): container finished" podID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerID="d1fcb61369d9be6fa7ee57fe63dceb3862237c2d5f43c872740dd8fd87fbe756" exitCode=0
Feb 17 09:10:23 crc kubenswrapper[4813]: I0217 09:10:23.606165 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e","Type":"ContainerDied","Data":"d1fcb61369d9be6fa7ee57fe63dceb3862237c2d5f43c872740dd8fd87fbe756"}
Feb 17 09:10:23 crc kubenswrapper[4813]: I0217 09:10:23.990146 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.051576 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-scripts\") pod \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.051738 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-sg-core-conf-yaml\") pod \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.051797 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t67pv\" (UniqueName: \"kubernetes.io/projected/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-kube-api-access-t67pv\") pod \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.051832 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-run-httpd\") pod \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.051886 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-combined-ca-bundle\") pod \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.051911 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-log-httpd\") pod \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.051935 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-config-data\") pod \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.051981 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-ceilometer-tls-certs\") pod \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\" (UID: \"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.052393 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" (UID: "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.053296 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" (UID: "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.062627 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-kube-api-access-t67pv" (OuterVolumeSpecName: "kube-api-access-t67pv") pod "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" (UID: "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e"). InnerVolumeSpecName "kube-api-access-t67pv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.063449 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-scripts" (OuterVolumeSpecName: "scripts") pod "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" (UID: "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.089543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" (UID: "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.100424 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.118904 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" (UID: "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.149881 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" (UID: "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.152971 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-logs\") pod \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153034 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-cert-memcached-mtls\") pod \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153078 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-combined-ca-bundle\") pod \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153118 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-custom-prometheus-ca\") pod \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153144 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xcz2\" (UniqueName: \"kubernetes.io/projected/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-kube-api-access-5xcz2\") pod \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153167 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data\") pod \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\" (UID: \"4b154f98-5116-4b3d-a110-bbc6f07b6f6c\") "
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153398 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-logs" (OuterVolumeSpecName: "logs") pod "4b154f98-5116-4b3d-a110-bbc6f07b6f6c" (UID: "4b154f98-5116-4b3d-a110-bbc6f07b6f6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153862 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153883 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153895 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-logs\") on node \"crc\" DevicePath \"\""
Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153906 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName:
\"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153916 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153927 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153939 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t67pv\" (UniqueName: \"kubernetes.io/projected/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-kube-api-access-t67pv\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.153951 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.166408 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-kube-api-access-5xcz2" (OuterVolumeSpecName: "kube-api-access-5xcz2") pod "4b154f98-5116-4b3d-a110-bbc6f07b6f6c" (UID: "4b154f98-5116-4b3d-a110-bbc6f07b6f6c"). InnerVolumeSpecName "kube-api-access-5xcz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.183881 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b154f98-5116-4b3d-a110-bbc6f07b6f6c" (UID: "4b154f98-5116-4b3d-a110-bbc6f07b6f6c"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.185468 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "4b154f98-5116-4b3d-a110-bbc6f07b6f6c" (UID: "4b154f98-5116-4b3d-a110-bbc6f07b6f6c"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.201237 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-config-data" (OuterVolumeSpecName: "config-data") pod "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" (UID: "2a4ecabc-5224-45c7-89d7-fe4fc3c2945e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.202079 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data" (OuterVolumeSpecName: "config-data") pod "4b154f98-5116-4b3d-a110-bbc6f07b6f6c" (UID: "4b154f98-5116-4b3d-a110-bbc6f07b6f6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.232803 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "4b154f98-5116-4b3d-a110-bbc6f07b6f6c" (UID: "4b154f98-5116-4b3d-a110-bbc6f07b6f6c"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.255243 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.255329 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.255345 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xcz2\" (UniqueName: \"kubernetes.io/projected/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-kube-api-access-5xcz2\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.255360 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.255399 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.255411 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4b154f98-5116-4b3d-a110-bbc6f07b6f6c-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.617112 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"2a4ecabc-5224-45c7-89d7-fe4fc3c2945e","Type":"ContainerDied","Data":"0e5d53de5041fc43f15672fe6c4101e99c5a271daed2db580fa52c3d0a141d8a"} Feb 17 
09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.617193 4813 scope.go:117] "RemoveContainer" containerID="c10fb5f6bf7bd19ea74c53523706321bcc7bbf701350e7b2379d8815fffd7aea" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.617908 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.619204 4813 generic.go:334] "Generic (PLEG): container finished" podID="4b154f98-5116-4b3d-a110-bbc6f07b6f6c" containerID="4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4" exitCode=0 Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.619256 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"4b154f98-5116-4b3d-a110-bbc6f07b6f6c","Type":"ContainerDied","Data":"4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4"} Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.619288 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"4b154f98-5116-4b3d-a110-bbc6f07b6f6c","Type":"ContainerDied","Data":"2fb7ec0451677564555fbb7d9e90f2cf930d981586b0d0ebbfe510092f0df619"} Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.619377 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.657502 4813 scope.go:117] "RemoveContainer" containerID="57612cd2d863b67b8086e1cd5b47eb53895078428aaaa12bd2e32e5d5f591fcd" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.662467 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.667908 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.703651 4813 scope.go:117] "RemoveContainer" containerID="d1fcb61369d9be6fa7ee57fe63dceb3862237c2d5f43c872740dd8fd87fbe756" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.705018 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.711673 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.718410 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:24 crc kubenswrapper[4813]: E0217 09:10:24.718768 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c3932a-3be9-4161-b873-a8a905d7c639" containerName="watcher-kuttl-api-log" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.718789 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c3932a-3be9-4161-b873-a8a905d7c639" containerName="watcher-kuttl-api-log" Feb 17 09:10:24 crc kubenswrapper[4813]: E0217 09:10:24.718801 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c3c80f-dcca-441c-9d3b-a9b20a078b2f" containerName="watcher-applier" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.718809 4813 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a8c3c80f-dcca-441c-9d3b-a9b20a078b2f" containerName="watcher-applier" Feb 17 09:10:24 crc kubenswrapper[4813]: E0217 09:10:24.718820 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="proxy-httpd" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.718829 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="proxy-httpd" Feb 17 09:10:24 crc kubenswrapper[4813]: E0217 09:10:24.718838 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7485a09e-b88b-48b6-ba27-0ee92958322c" containerName="mariadb-account-delete" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.718846 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7485a09e-b88b-48b6-ba27-0ee92958322c" containerName="mariadb-account-delete" Feb 17 09:10:24 crc kubenswrapper[4813]: E0217 09:10:24.718868 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c3932a-3be9-4161-b873-a8a905d7c639" containerName="watcher-api" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.718876 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c3932a-3be9-4161-b873-a8a905d7c639" containerName="watcher-api" Feb 17 09:10:24 crc kubenswrapper[4813]: E0217 09:10:24.718889 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="sg-core" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.718897 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="sg-core" Feb 17 09:10:24 crc kubenswrapper[4813]: E0217 09:10:24.718921 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="ceilometer-notification-agent" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.718928 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="ceilometer-notification-agent" Feb 17 09:10:24 crc kubenswrapper[4813]: E0217 09:10:24.718939 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b154f98-5116-4b3d-a110-bbc6f07b6f6c" containerName="watcher-decision-engine" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.718947 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b154f98-5116-4b3d-a110-bbc6f07b6f6c" containerName="watcher-decision-engine" Feb 17 09:10:24 crc kubenswrapper[4813]: E0217 09:10:24.718962 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="ceilometer-central-agent" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.718970 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="ceilometer-central-agent" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.719149 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="ceilometer-notification-agent" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.719208 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="sg-core" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.719228 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c3932a-3be9-4161-b873-a8a905d7c639" containerName="watcher-api" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.719237 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="proxy-httpd" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.719262 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7485a09e-b88b-48b6-ba27-0ee92958322c" containerName="mariadb-account-delete" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 
09:10:24.719273 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" containerName="ceilometer-central-agent" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.719282 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c3c80f-dcca-441c-9d3b-a9b20a078b2f" containerName="watcher-applier" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.719293 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b154f98-5116-4b3d-a110-bbc6f07b6f6c" containerName="watcher-decision-engine" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.719302 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c3932a-3be9-4161-b873-a8a905d7c639" containerName="watcher-kuttl-api-log" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.721120 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.723619 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.727456 4813 scope.go:117] "RemoveContainer" containerID="377866df9bb04533e7904ce04f1439c72508a6bcf520a03ffebe153f4b3e6ae7" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.727865 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.728235 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.728574 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.761151 4813 scope.go:117] "RemoveContainer" 
containerID="4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.765939 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-log-httpd\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.765974 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-run-httpd\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.765995 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.766063 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.766094 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-scripts\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 
17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.766116 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jn5b\" (UniqueName: \"kubernetes.io/projected/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-kube-api-access-5jn5b\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.766385 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-config-data\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.766446 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.781347 4813 scope.go:117] "RemoveContainer" containerID="4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4" Feb 17 09:10:24 crc kubenswrapper[4813]: E0217 09:10:24.781847 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4\": container with ID starting with 4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4 not found: ID does not exist" containerID="4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.781875 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4"} err="failed to get container status \"4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4\": rpc error: code = NotFound desc = could not find container \"4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4\": container with ID starting with 4261478366d1e1c69c67612d09041fa2f1f5d4a8a6462b3e5fe6f3244996f0f4 not found: ID does not exist" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.867861 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.867930 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-scripts\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.867963 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jn5b\" (UniqueName: \"kubernetes.io/projected/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-kube-api-access-5jn5b\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.867989 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-config-data\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.868040 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.868077 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-log-httpd\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.868097 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-run-httpd\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.868117 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.868817 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-log-httpd\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.869186 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-run-httpd\") pod 
\"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.872131 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.872485 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-scripts\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.873094 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.882001 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.882699 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-config-data\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:24 crc kubenswrapper[4813]: I0217 09:10:24.886769 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5jn5b\" (UniqueName: \"kubernetes.io/projected/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-kube-api-access-5jn5b\") pod \"ceilometer-0\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.059456 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.123782 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9" path="/var/lib/kubelet/pods/16a87d42-f2b1-4d2c-b23e-3a34a10cdfe9/volumes" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.124644 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a4ecabc-5224-45c7-89d7-fe4fc3c2945e" path="/var/lib/kubelet/pods/2a4ecabc-5224-45c7-89d7-fe4fc3c2945e/volumes" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.125346 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b154f98-5116-4b3d-a110-bbc6f07b6f6c" path="/var/lib/kubelet/pods/4b154f98-5116-4b3d-a110-bbc6f07b6f6c/volumes" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.126221 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7485a09e-b88b-48b6-ba27-0ee92958322c" path="/var/lib/kubelet/pods/7485a09e-b88b-48b6-ba27-0ee92958322c/volumes" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.130042 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd68cea-e70d-4575-83e6-5bf81ede6566" path="/var/lib/kubelet/pods/afd68cea-e70d-4575-83e6-5bf81ede6566/volumes" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.149649 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd"] Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.150693 4813 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.153253 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.172320 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvltb\" (UniqueName: \"kubernetes.io/projected/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609-kube-api-access-dvltb\") pod \"watcher-95cc-account-create-update-bl7xd\" (UID: \"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609\") " pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.172486 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609-operator-scripts\") pod \"watcher-95cc-account-create-update-bl7xd\" (UID: \"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609\") " pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.175332 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-hf9z7"] Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.176255 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hf9z7" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.212350 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hf9z7"] Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.219405 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd"] Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.274123 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3170e2b7-c40d-4f17-8ef2-98415ccd30ea-operator-scripts\") pod \"watcher-db-create-hf9z7\" (UID: \"3170e2b7-c40d-4f17-8ef2-98415ccd30ea\") " pod="watcher-kuttl-default/watcher-db-create-hf9z7" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.274441 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvltb\" (UniqueName: \"kubernetes.io/projected/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609-kube-api-access-dvltb\") pod \"watcher-95cc-account-create-update-bl7xd\" (UID: \"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609\") " pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.274501 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66bhb\" (UniqueName: \"kubernetes.io/projected/3170e2b7-c40d-4f17-8ef2-98415ccd30ea-kube-api-access-66bhb\") pod \"watcher-db-create-hf9z7\" (UID: \"3170e2b7-c40d-4f17-8ef2-98415ccd30ea\") " pod="watcher-kuttl-default/watcher-db-create-hf9z7" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.274675 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609-operator-scripts\") pod 
\"watcher-95cc-account-create-update-bl7xd\" (UID: \"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609\") " pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.275384 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609-operator-scripts\") pod \"watcher-95cc-account-create-update-bl7xd\" (UID: \"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609\") " pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.292592 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvltb\" (UniqueName: \"kubernetes.io/projected/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609-kube-api-access-dvltb\") pod \"watcher-95cc-account-create-update-bl7xd\" (UID: \"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609\") " pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.375771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3170e2b7-c40d-4f17-8ef2-98415ccd30ea-operator-scripts\") pod \"watcher-db-create-hf9z7\" (UID: \"3170e2b7-c40d-4f17-8ef2-98415ccd30ea\") " pod="watcher-kuttl-default/watcher-db-create-hf9z7" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.375862 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66bhb\" (UniqueName: \"kubernetes.io/projected/3170e2b7-c40d-4f17-8ef2-98415ccd30ea-kube-api-access-66bhb\") pod \"watcher-db-create-hf9z7\" (UID: \"3170e2b7-c40d-4f17-8ef2-98415ccd30ea\") " pod="watcher-kuttl-default/watcher-db-create-hf9z7" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.376608 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3170e2b7-c40d-4f17-8ef2-98415ccd30ea-operator-scripts\") pod \"watcher-db-create-hf9z7\" (UID: \"3170e2b7-c40d-4f17-8ef2-98415ccd30ea\") " pod="watcher-kuttl-default/watcher-db-create-hf9z7" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.392719 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66bhb\" (UniqueName: \"kubernetes.io/projected/3170e2b7-c40d-4f17-8ef2-98415ccd30ea-kube-api-access-66bhb\") pod \"watcher-db-create-hf9z7\" (UID: \"3170e2b7-c40d-4f17-8ef2-98415ccd30ea\") " pod="watcher-kuttl-default/watcher-db-create-hf9z7" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.466370 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.509660 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hf9z7" Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.558798 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:25 crc kubenswrapper[4813]: I0217 09:10:25.685269 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e","Type":"ContainerStarted","Data":"c29cbda82d664a0773a4b082651a8d6131401a6a8f4f5d1fd3a20cf9fa710bdd"} Feb 17 09:10:26 crc kubenswrapper[4813]: I0217 09:10:26.061257 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd"] Feb 17 09:10:26 crc kubenswrapper[4813]: W0217 09:10:26.065365 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3170e2b7_c40d_4f17_8ef2_98415ccd30ea.slice/crio-3d3c37d053408f59729705b222a6e9891dfcc44329587c21c4159d05f47d4870 WatchSource:0}: 
Error finding container 3d3c37d053408f59729705b222a6e9891dfcc44329587c21c4159d05f47d4870: Status 404 returned error can't find the container with id 3d3c37d053408f59729705b222a6e9891dfcc44329587c21c4159d05f47d4870 Feb 17 09:10:26 crc kubenswrapper[4813]: I0217 09:10:26.069378 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hf9z7"] Feb 17 09:10:26 crc kubenswrapper[4813]: I0217 09:10:26.694920 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e","Type":"ContainerStarted","Data":"840ec26f06b6c84b7060d8117eb1e876ea0a2344ae80e2348607e7a639b47dba"} Feb 17 09:10:26 crc kubenswrapper[4813]: I0217 09:10:26.696812 4813 generic.go:334] "Generic (PLEG): container finished" podID="3170e2b7-c40d-4f17-8ef2-98415ccd30ea" containerID="2e77aedca53d58f712e0dfc24784cd0e9d2a2c746518db068ebbf91e849174a8" exitCode=0 Feb 17 09:10:26 crc kubenswrapper[4813]: I0217 09:10:26.696863 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hf9z7" event={"ID":"3170e2b7-c40d-4f17-8ef2-98415ccd30ea","Type":"ContainerDied","Data":"2e77aedca53d58f712e0dfc24784cd0e9d2a2c746518db068ebbf91e849174a8"} Feb 17 09:10:26 crc kubenswrapper[4813]: I0217 09:10:26.696881 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hf9z7" event={"ID":"3170e2b7-c40d-4f17-8ef2-98415ccd30ea","Type":"ContainerStarted","Data":"3d3c37d053408f59729705b222a6e9891dfcc44329587c21c4159d05f47d4870"} Feb 17 09:10:26 crc kubenswrapper[4813]: I0217 09:10:26.698570 4813 generic.go:334] "Generic (PLEG): container finished" podID="5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609" containerID="871acd3e2a2905b85697d728b6c800260b473d6e2c132ddd69180d24f03b974f" exitCode=0 Feb 17 09:10:26 crc kubenswrapper[4813]: I0217 09:10:26.698604 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" event={"ID":"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609","Type":"ContainerDied","Data":"871acd3e2a2905b85697d728b6c800260b473d6e2c132ddd69180d24f03b974f"} Feb 17 09:10:26 crc kubenswrapper[4813]: I0217 09:10:26.698621 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" event={"ID":"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609","Type":"ContainerStarted","Data":"466ef70e9d43aa1b6b097c15eb82dc16d3e61d726193455a60cd02cc36655300"} Feb 17 09:10:27 crc kubenswrapper[4813]: I0217 09:10:27.709126 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e","Type":"ContainerStarted","Data":"71ca0ba88773075ce09fc90a6d8abe5dd8b1326d84e508c7237cae5cd523c29f"} Feb 17 09:10:27 crc kubenswrapper[4813]: I0217 09:10:27.709449 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e","Type":"ContainerStarted","Data":"ea865e6a7b4ee72ac3fb32659199a2976aeb564f13280b17fe93a40c7a64e023"} Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.118224 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hf9z7" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.121569 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.234522 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609-operator-scripts\") pod \"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609\" (UID: \"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609\") " Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.234594 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3170e2b7-c40d-4f17-8ef2-98415ccd30ea-operator-scripts\") pod \"3170e2b7-c40d-4f17-8ef2-98415ccd30ea\" (UID: \"3170e2b7-c40d-4f17-8ef2-98415ccd30ea\") " Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.234668 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66bhb\" (UniqueName: \"kubernetes.io/projected/3170e2b7-c40d-4f17-8ef2-98415ccd30ea-kube-api-access-66bhb\") pod \"3170e2b7-c40d-4f17-8ef2-98415ccd30ea\" (UID: \"3170e2b7-c40d-4f17-8ef2-98415ccd30ea\") " Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.234753 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvltb\" (UniqueName: \"kubernetes.io/projected/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609-kube-api-access-dvltb\") pod \"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609\" (UID: \"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609\") " Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.235125 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3170e2b7-c40d-4f17-8ef2-98415ccd30ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3170e2b7-c40d-4f17-8ef2-98415ccd30ea" (UID: "3170e2b7-c40d-4f17-8ef2-98415ccd30ea"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.235258 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609" (UID: "5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.239777 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3170e2b7-c40d-4f17-8ef2-98415ccd30ea-kube-api-access-66bhb" (OuterVolumeSpecName: "kube-api-access-66bhb") pod "3170e2b7-c40d-4f17-8ef2-98415ccd30ea" (UID: "3170e2b7-c40d-4f17-8ef2-98415ccd30ea"). InnerVolumeSpecName "kube-api-access-66bhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.239837 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609-kube-api-access-dvltb" (OuterVolumeSpecName: "kube-api-access-dvltb") pod "5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609" (UID: "5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609"). InnerVolumeSpecName "kube-api-access-dvltb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.336506 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.336545 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3170e2b7-c40d-4f17-8ef2-98415ccd30ea-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.336563 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66bhb\" (UniqueName: \"kubernetes.io/projected/3170e2b7-c40d-4f17-8ef2-98415ccd30ea-kube-api-access-66bhb\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.336581 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvltb\" (UniqueName: \"kubernetes.io/projected/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609-kube-api-access-dvltb\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.720229 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" event={"ID":"5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609","Type":"ContainerDied","Data":"466ef70e9d43aa1b6b097c15eb82dc16d3e61d726193455a60cd02cc36655300"} Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.720575 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="466ef70e9d43aa1b6b097c15eb82dc16d3e61d726193455a60cd02cc36655300" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.720667 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.734455 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hf9z7" event={"ID":"3170e2b7-c40d-4f17-8ef2-98415ccd30ea","Type":"ContainerDied","Data":"3d3c37d053408f59729705b222a6e9891dfcc44329587c21c4159d05f47d4870"} Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.734495 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d3c37d053408f59729705b222a6e9891dfcc44329587c21c4159d05f47d4870" Feb 17 09:10:28 crc kubenswrapper[4813]: I0217 09:10:28.734514 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hf9z7" Feb 17 09:10:29 crc kubenswrapper[4813]: I0217 09:10:29.766089 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e","Type":"ContainerStarted","Data":"79a23cd987d25b4ba8cf7db51491fd07edd567d46ae3155b711c83e467100ac1"} Feb 17 09:10:29 crc kubenswrapper[4813]: I0217 09:10:29.767886 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:29 crc kubenswrapper[4813]: I0217 09:10:29.798362 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.780690003 podStartE2EDuration="5.798343345s" podCreationTimestamp="2026-02-17 09:10:24 +0000 UTC" firstStartedPulling="2026-02-17 09:10:25.576275913 +0000 UTC m=+1773.237037136" lastFinishedPulling="2026-02-17 09:10:28.593929255 +0000 UTC m=+1776.254690478" observedRunningTime="2026-02-17 09:10:29.793550198 +0000 UTC m=+1777.454311501" watchObservedRunningTime="2026-02-17 09:10:29.798343345 +0000 UTC m=+1777.459104568" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.418800 
4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k"] Feb 17 09:10:30 crc kubenswrapper[4813]: E0217 09:10:30.419174 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3170e2b7-c40d-4f17-8ef2-98415ccd30ea" containerName="mariadb-database-create" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.419195 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3170e2b7-c40d-4f17-8ef2-98415ccd30ea" containerName="mariadb-database-create" Feb 17 09:10:30 crc kubenswrapper[4813]: E0217 09:10:30.419210 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609" containerName="mariadb-account-create-update" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.419218 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609" containerName="mariadb-account-create-update" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.419784 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609" containerName="mariadb-account-create-update" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.419818 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3170e2b7-c40d-4f17-8ef2-98415ccd30ea" containerName="mariadb-database-create" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.420486 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.422161 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.422710 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-zbhsq" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.434706 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k"] Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.481206 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhppz\" (UniqueName: \"kubernetes.io/projected/fc14c901-a140-4930-a64a-074ffd207f24-kube-api-access-hhppz\") pod \"watcher-kuttl-db-sync-4tt6k\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.481344 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-4tt6k\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.481387 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-db-sync-config-data\") pod \"watcher-kuttl-db-sync-4tt6k\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.481584 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-config-data\") pod \"watcher-kuttl-db-sync-4tt6k\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.583535 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhppz\" (UniqueName: \"kubernetes.io/projected/fc14c901-a140-4930-a64a-074ffd207f24-kube-api-access-hhppz\") pod \"watcher-kuttl-db-sync-4tt6k\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.583665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-4tt6k\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.583693 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-db-sync-config-data\") pod \"watcher-kuttl-db-sync-4tt6k\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.583744 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-config-data\") pod \"watcher-kuttl-db-sync-4tt6k\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: 
I0217 09:10:30.588551 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-db-sync-config-data\") pod \"watcher-kuttl-db-sync-4tt6k\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.588795 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-config-data\") pod \"watcher-kuttl-db-sync-4tt6k\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.600013 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-4tt6k\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.600778 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhppz\" (UniqueName: \"kubernetes.io/projected/fc14c901-a140-4930-a64a-074ffd207f24-kube-api-access-hhppz\") pod \"watcher-kuttl-db-sync-4tt6k\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:30 crc kubenswrapper[4813]: I0217 09:10:30.738382 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:31 crc kubenswrapper[4813]: I0217 09:10:31.209889 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k"] Feb 17 09:10:31 crc kubenswrapper[4813]: W0217 09:10:31.212980 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc14c901_a140_4930_a64a_074ffd207f24.slice/crio-c28cac8925cc0102d06fe87d6a28d322ab4843dcadae87167a67cac399fa1009 WatchSource:0}: Error finding container c28cac8925cc0102d06fe87d6a28d322ab4843dcadae87167a67cac399fa1009: Status 404 returned error can't find the container with id c28cac8925cc0102d06fe87d6a28d322ab4843dcadae87167a67cac399fa1009 Feb 17 09:10:31 crc kubenswrapper[4813]: I0217 09:10:31.788609 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" event={"ID":"fc14c901-a140-4930-a64a-074ffd207f24","Type":"ContainerStarted","Data":"da8582cc36a23d8436bd804d1ea620cad0761d76d9d55fbecb5a7848ec6ec8ca"} Feb 17 09:10:31 crc kubenswrapper[4813]: I0217 09:10:31.788973 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" event={"ID":"fc14c901-a140-4930-a64a-074ffd207f24","Type":"ContainerStarted","Data":"c28cac8925cc0102d06fe87d6a28d322ab4843dcadae87167a67cac399fa1009"} Feb 17 09:10:31 crc kubenswrapper[4813]: I0217 09:10:31.808429 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" podStartSLOduration=1.808410165 podStartE2EDuration="1.808410165s" podCreationTimestamp="2026-02-17 09:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:10:31.803536417 +0000 UTC m=+1779.464297640" watchObservedRunningTime="2026-02-17 
09:10:31.808410165 +0000 UTC m=+1779.469171388" Feb 17 09:10:33 crc kubenswrapper[4813]: I0217 09:10:33.839452 4813 generic.go:334] "Generic (PLEG): container finished" podID="fc14c901-a140-4930-a64a-074ffd207f24" containerID="da8582cc36a23d8436bd804d1ea620cad0761d76d9d55fbecb5a7848ec6ec8ca" exitCode=0 Feb 17 09:10:33 crc kubenswrapper[4813]: I0217 09:10:33.839554 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" event={"ID":"fc14c901-a140-4930-a64a-074ffd207f24","Type":"ContainerDied","Data":"da8582cc36a23d8436bd804d1ea620cad0761d76d9d55fbecb5a7848ec6ec8ca"} Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.184853 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.276797 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhppz\" (UniqueName: \"kubernetes.io/projected/fc14c901-a140-4930-a64a-074ffd207f24-kube-api-access-hhppz\") pod \"fc14c901-a140-4930-a64a-074ffd207f24\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.276854 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-combined-ca-bundle\") pod \"fc14c901-a140-4930-a64a-074ffd207f24\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.276901 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-config-data\") pod \"fc14c901-a140-4930-a64a-074ffd207f24\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.277044 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-db-sync-config-data\") pod \"fc14c901-a140-4930-a64a-074ffd207f24\" (UID: \"fc14c901-a140-4930-a64a-074ffd207f24\") " Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.302862 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fc14c901-a140-4930-a64a-074ffd207f24" (UID: "fc14c901-a140-4930-a64a-074ffd207f24"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.303522 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc14c901-a140-4930-a64a-074ffd207f24-kube-api-access-hhppz" (OuterVolumeSpecName: "kube-api-access-hhppz") pod "fc14c901-a140-4930-a64a-074ffd207f24" (UID: "fc14c901-a140-4930-a64a-074ffd207f24"). InnerVolumeSpecName "kube-api-access-hhppz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.330707 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-config-data" (OuterVolumeSpecName: "config-data") pod "fc14c901-a140-4930-a64a-074ffd207f24" (UID: "fc14c901-a140-4930-a64a-074ffd207f24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.331573 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc14c901-a140-4930-a64a-074ffd207f24" (UID: "fc14c901-a140-4930-a64a-074ffd207f24"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.378900 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.379205 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.379278 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc14c901-a140-4930-a64a-074ffd207f24-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.379390 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhppz\" (UniqueName: \"kubernetes.io/projected/fc14c901-a140-4930-a64a-074ffd207f24-kube-api-access-hhppz\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.867333 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" event={"ID":"fc14c901-a140-4930-a64a-074ffd207f24","Type":"ContainerDied","Data":"c28cac8925cc0102d06fe87d6a28d322ab4843dcadae87167a67cac399fa1009"} Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.867375 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c28cac8925cc0102d06fe87d6a28d322ab4843dcadae87167a67cac399fa1009" Feb 17 09:10:35 crc kubenswrapper[4813]: I0217 09:10:35.867422 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.267705 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:10:36 crc kubenswrapper[4813]: E0217 09:10:36.268146 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc14c901-a140-4930-a64a-074ffd207f24" containerName="watcher-kuttl-db-sync" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.268165 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc14c901-a140-4930-a64a-074ffd207f24" containerName="watcher-kuttl-db-sync" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.268399 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc14c901-a140-4930-a64a-074ffd207f24" containerName="watcher-kuttl-db-sync" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.269405 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.272132 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.272975 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-zbhsq" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.275361 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.276700 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.285666 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.310000 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.360408 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.361296 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.362714 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.370592 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.371556 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.377583 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.378474 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.388116 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.393265 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.393350 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f01fe088-dd12-4772-93a5-03e4c0cee445-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.393371 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg8qm\" (UniqueName: \"kubernetes.io/projected/f01fe088-dd12-4772-93a5-03e4c0cee445-kube-api-access-fg8qm\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.393395 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c14ce6-cc09-4b5d-8b16-8c831809e097-logs\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.393411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.393427 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.393458 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.393475 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.393502 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.393520 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.393556 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.393573 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfztc\" (UniqueName: \"kubernetes.io/projected/15c14ce6-cc09-4b5d-8b16-8c831809e097-kube-api-access-mfztc\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.494889 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.494948 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.494988 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495011 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfztc\" (UniqueName: \"kubernetes.io/projected/15c14ce6-cc09-4b5d-8b16-8c831809e097-kube-api-access-mfztc\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495049 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495097 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 
09:10:36.495141 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f01fe088-dd12-4772-93a5-03e4c0cee445-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495156 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg8qm\" (UniqueName: \"kubernetes.io/projected/f01fe088-dd12-4772-93a5-03e4c0cee445-kube-api-access-fg8qm\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495175 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae336ae-5af7-44f0-aa57-e1ab5da43770-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495193 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495220 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c14ce6-cc09-4b5d-8b16-8c831809e097-logs\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495238 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495256 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495272 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495298 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbxtw\" (UniqueName: \"kubernetes.io/projected/9ae336ae-5af7-44f0-aa57-e1ab5da43770-kube-api-access-bbxtw\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495333 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495351 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495454 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495534 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c14ce6-cc09-4b5d-8b16-8c831809e097-logs\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495623 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f01fe088-dd12-4772-93a5-03e4c0cee445-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495676 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495728 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495753 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d97c34c8-66f9-406a-b988-14449fcc40b0-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.495780 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88qxc\" (UniqueName: \"kubernetes.io/projected/d97c34c8-66f9-406a-b988-14449fcc40b0-kube-api-access-88qxc\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.498909 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.499329 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.499333 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.499854 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.507538 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.507659 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.507895 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-cert-memcached-mtls\") 
pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.512032 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.512569 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg8qm\" (UniqueName: \"kubernetes.io/projected/f01fe088-dd12-4772-93a5-03e4c0cee445-kube-api-access-fg8qm\") pod \"watcher-kuttl-api-0\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.513758 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfztc\" (UniqueName: \"kubernetes.io/projected/15c14ce6-cc09-4b5d-8b16-8c831809e097-kube-api-access-mfztc\") pod \"watcher-kuttl-api-1\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.589678 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.597858 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.597971 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.598028 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbxtw\" (UniqueName: \"kubernetes.io/projected/9ae336ae-5af7-44f0-aa57-e1ab5da43770-kube-api-access-bbxtw\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.598071 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.598117 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.598172 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d97c34c8-66f9-406a-b988-14449fcc40b0-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.598207 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88qxc\" (UniqueName: \"kubernetes.io/projected/d97c34c8-66f9-406a-b988-14449fcc40b0-kube-api-access-88qxc\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.598261 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.598338 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.598398 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.598481 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae336ae-5af7-44f0-aa57-e1ab5da43770-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.598935 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d97c34c8-66f9-406a-b988-14449fcc40b0-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.599189 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae336ae-5af7-44f0-aa57-e1ab5da43770-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.599913 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.604815 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.605251 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.605738 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.606424 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.606858 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.606973 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.610449 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.618474 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88qxc\" (UniqueName: \"kubernetes.io/projected/d97c34c8-66f9-406a-b988-14449fcc40b0-kube-api-access-88qxc\") pod \"watcher-kuttl-applier-0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.636702 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbxtw\" (UniqueName: \"kubernetes.io/projected/9ae336ae-5af7-44f0-aa57-e1ab5da43770-kube-api-access-bbxtw\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.689239 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:36 crc kubenswrapper[4813]: I0217 09:10:36.707577 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.112051 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:10:37 crc kubenswrapper[4813]: E0217 09:10:37.112376 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.130120 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.318153 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.393808 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.409923 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.887701 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9ae336ae-5af7-44f0-aa57-e1ab5da43770","Type":"ContainerStarted","Data":"a00c3d6ed08e7ae45f28d7867973a21bb17ff5b6b51617afdc73f41a3b92de6a"} Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.887749 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"9ae336ae-5af7-44f0-aa57-e1ab5da43770","Type":"ContainerStarted","Data":"024f429e92b57dddff910cec52f49ec6abf24245342601f762c41197ddd0cfe5"} Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.889615 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f01fe088-dd12-4772-93a5-03e4c0cee445","Type":"ContainerStarted","Data":"4706e9317130c5b9d1dba9c7dcb0dd4612a41b59b42108cb509cd26ecd8d8ce2"} Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.889645 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f01fe088-dd12-4772-93a5-03e4c0cee445","Type":"ContainerStarted","Data":"e895350aa8b6859369d733ebaf5e899a96e527cf731a88f17ae36ef54eeacc35"} Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.889655 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f01fe088-dd12-4772-93a5-03e4c0cee445","Type":"ContainerStarted","Data":"c6d13ceb806558c941d6653330579a3b13341cf4866c42a9069452b57d3425df"} Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.889865 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.891580 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"15c14ce6-cc09-4b5d-8b16-8c831809e097","Type":"ContainerStarted","Data":"3cef7a096374e68894fcbfda9d1ad13de4be936e3fe232da5e40a4cfc32cf219"} Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.891615 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"15c14ce6-cc09-4b5d-8b16-8c831809e097","Type":"ContainerStarted","Data":"c03871266e4a32a1009bdf365eca13d4791746c307a65c4a8aa14fc2e4d2d614"} Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.891629 4813 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"15c14ce6-cc09-4b5d-8b16-8c831809e097","Type":"ContainerStarted","Data":"b882fb05b8fd52a07285883c7fe006db1ac185a8852da0d9d71d00079205000c"} Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.891759 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.892454 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="15c14ce6-cc09-4b5d-8b16-8c831809e097" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.247:9322/\": dial tcp 10.217.0.247:9322: connect: connection refused" Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.893390 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d97c34c8-66f9-406a-b988-14449fcc40b0","Type":"ContainerStarted","Data":"75cc0ae3fe13e81663970579bc25f71b8594bcf2fce98fabd410109b65fd1a04"} Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.893421 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d97c34c8-66f9-406a-b988-14449fcc40b0","Type":"ContainerStarted","Data":"b27010c21a5e125451336558e84a8135ae84cf6b067c39ed8525145b89272284"} Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.912863 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.912844657 podStartE2EDuration="1.912844657s" podCreationTimestamp="2026-02-17 09:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:10:37.910912782 +0000 UTC m=+1785.571674005" watchObservedRunningTime="2026-02-17 09:10:37.912844657 +0000 
UTC m=+1785.573605890" Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.960646 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.960627926 podStartE2EDuration="1.960627926s" podCreationTimestamp="2026-02-17 09:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:10:37.958926227 +0000 UTC m=+1785.619687450" watchObservedRunningTime="2026-02-17 09:10:37.960627926 +0000 UTC m=+1785.621389139" Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.964646 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=1.964633399 podStartE2EDuration="1.964633399s" podCreationTimestamp="2026-02-17 09:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:10:37.928224634 +0000 UTC m=+1785.588985857" watchObservedRunningTime="2026-02-17 09:10:37.964633399 +0000 UTC m=+1785.625394622" Feb 17 09:10:37 crc kubenswrapper[4813]: I0217 09:10:37.985836 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.985817932 podStartE2EDuration="1.985817932s" podCreationTimestamp="2026-02-17 09:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:10:37.982265651 +0000 UTC m=+1785.643026874" watchObservedRunningTime="2026-02-17 09:10:37.985817932 +0000 UTC m=+1785.646579155" Feb 17 09:10:39 crc kubenswrapper[4813]: I0217 09:10:39.991105 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:41 crc kubenswrapper[4813]: I0217 09:10:41.367801 4813 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:41 crc kubenswrapper[4813]: I0217 09:10:41.590612 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:41 crc kubenswrapper[4813]: I0217 09:10:41.600378 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:41 crc kubenswrapper[4813]: I0217 09:10:41.693027 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:46 crc kubenswrapper[4813]: I0217 09:10:46.589998 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:46 crc kubenswrapper[4813]: I0217 09:10:46.596283 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:46 crc kubenswrapper[4813]: I0217 09:10:46.600770 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:46 crc kubenswrapper[4813]: I0217 09:10:46.607757 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:46 crc kubenswrapper[4813]: I0217 09:10:46.693285 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:46 crc kubenswrapper[4813]: I0217 09:10:46.709302 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:46 crc kubenswrapper[4813]: I0217 09:10:46.718621 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:46 
crc kubenswrapper[4813]: I0217 09:10:46.755387 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:46 crc kubenswrapper[4813]: I0217 09:10:46.978582 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:46 crc kubenswrapper[4813]: I0217 09:10:46.990551 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:10:46 crc kubenswrapper[4813]: I0217 09:10:46.993004 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:10:47 crc kubenswrapper[4813]: I0217 09:10:47.023404 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:10:47 crc kubenswrapper[4813]: I0217 09:10:47.031008 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:10:49 crc kubenswrapper[4813]: I0217 09:10:49.022231 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:49 crc kubenswrapper[4813]: I0217 09:10:49.023071 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="proxy-httpd" containerID="cri-o://79a23cd987d25b4ba8cf7db51491fd07edd567d46ae3155b711c83e467100ac1" gracePeriod=30 Feb 17 09:10:49 crc kubenswrapper[4813]: I0217 09:10:49.023406 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="ceilometer-notification-agent" 
containerID="cri-o://ea865e6a7b4ee72ac3fb32659199a2976aeb564f13280b17fe93a40c7a64e023" gracePeriod=30 Feb 17 09:10:49 crc kubenswrapper[4813]: I0217 09:10:49.023392 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="sg-core" containerID="cri-o://71ca0ba88773075ce09fc90a6d8abe5dd8b1326d84e508c7237cae5cd523c29f" gracePeriod=30 Feb 17 09:10:49 crc kubenswrapper[4813]: I0217 09:10:49.023014 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="ceilometer-central-agent" containerID="cri-o://840ec26f06b6c84b7060d8117eb1e876ea0a2344ae80e2348607e7a639b47dba" gracePeriod=30 Feb 17 09:10:49 crc kubenswrapper[4813]: I0217 09:10:49.034630 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.242:3000/\": EOF" Feb 17 09:10:50 crc kubenswrapper[4813]: I0217 09:10:50.015789 4813 generic.go:334] "Generic (PLEG): container finished" podID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerID="79a23cd987d25b4ba8cf7db51491fd07edd567d46ae3155b711c83e467100ac1" exitCode=0 Feb 17 09:10:50 crc kubenswrapper[4813]: I0217 09:10:50.016159 4813 generic.go:334] "Generic (PLEG): container finished" podID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerID="71ca0ba88773075ce09fc90a6d8abe5dd8b1326d84e508c7237cae5cd523c29f" exitCode=2 Feb 17 09:10:50 crc kubenswrapper[4813]: I0217 09:10:50.015863 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e","Type":"ContainerDied","Data":"79a23cd987d25b4ba8cf7db51491fd07edd567d46ae3155b711c83e467100ac1"} Feb 17 09:10:50 crc kubenswrapper[4813]: I0217 
09:10:50.016202 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e","Type":"ContainerDied","Data":"71ca0ba88773075ce09fc90a6d8abe5dd8b1326d84e508c7237cae5cd523c29f"} Feb 17 09:10:50 crc kubenswrapper[4813]: I0217 09:10:50.016213 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e","Type":"ContainerDied","Data":"840ec26f06b6c84b7060d8117eb1e876ea0a2344ae80e2348607e7a639b47dba"} Feb 17 09:10:50 crc kubenswrapper[4813]: I0217 09:10:50.016174 4813 generic.go:334] "Generic (PLEG): container finished" podID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerID="840ec26f06b6c84b7060d8117eb1e876ea0a2344ae80e2348607e7a639b47dba" exitCode=0 Feb 17 09:10:52 crc kubenswrapper[4813]: I0217 09:10:52.112458 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:10:52 crc kubenswrapper[4813]: E0217 09:10:52.115031 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.048010 4813 generic.go:334] "Generic (PLEG): container finished" podID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerID="ea865e6a7b4ee72ac3fb32659199a2976aeb564f13280b17fe93a40c7a64e023" exitCode=0 Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.048067 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e","Type":"ContainerDied","Data":"ea865e6a7b4ee72ac3fb32659199a2976aeb564f13280b17fe93a40c7a64e023"} Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.166475 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.349860 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-scripts\") pod \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.349948 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-run-httpd\") pod \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.349994 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-config-data\") pod \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.350031 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-combined-ca-bundle\") pod \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.350090 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-sg-core-conf-yaml\") pod 
\"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.350118 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-ceilometer-tls-certs\") pod \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.350192 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jn5b\" (UniqueName: \"kubernetes.io/projected/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-kube-api-access-5jn5b\") pod \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.350227 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-log-httpd\") pod \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\" (UID: \"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e\") " Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.352945 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" (UID: "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.352940 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" (UID: "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.376237 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-kube-api-access-5jn5b" (OuterVolumeSpecName: "kube-api-access-5jn5b") pod "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" (UID: "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e"). InnerVolumeSpecName "kube-api-access-5jn5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.400993 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-scripts" (OuterVolumeSpecName: "scripts") pod "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" (UID: "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.455497 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.455527 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.455537 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jn5b\" (UniqueName: \"kubernetes.io/projected/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-kube-api-access-5jn5b\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.455545 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:53 
crc kubenswrapper[4813]: I0217 09:10:53.471845 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" (UID: "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.498432 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" (UID: "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.528280 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" (UID: "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.536411 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-config-data" (OuterVolumeSpecName: "config-data") pod "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" (UID: "3ca13cf5-6e58-428e-a1a1-0d79bb38b43e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.558234 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.558282 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.558295 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:53 crc kubenswrapper[4813]: I0217 09:10:53.558323 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.060829 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3ca13cf5-6e58-428e-a1a1-0d79bb38b43e","Type":"ContainerDied","Data":"c29cbda82d664a0773a4b082651a8d6131401a6a8f4f5d1fd3a20cf9fa710bdd"} Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.061142 4813 scope.go:117] "RemoveContainer" containerID="79a23cd987d25b4ba8cf7db51491fd07edd567d46ae3155b711c83e467100ac1" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.061468 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.080839 4813 scope.go:117] "RemoveContainer" containerID="71ca0ba88773075ce09fc90a6d8abe5dd8b1326d84e508c7237cae5cd523c29f" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.097549 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.107906 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.108161 4813 scope.go:117] "RemoveContainer" containerID="ea865e6a7b4ee72ac3fb32659199a2976aeb564f13280b17fe93a40c7a64e023" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.140802 4813 scope.go:117] "RemoveContainer" containerID="840ec26f06b6c84b7060d8117eb1e876ea0a2344ae80e2348607e7a639b47dba" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.144491 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:54 crc kubenswrapper[4813]: E0217 09:10:54.144968 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="ceilometer-notification-agent" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.144989 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="ceilometer-notification-agent" Feb 17 09:10:54 crc kubenswrapper[4813]: E0217 09:10:54.145013 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="proxy-httpd" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.145023 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="proxy-httpd" Feb 17 09:10:54 crc kubenswrapper[4813]: E0217 09:10:54.145034 4813 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="ceilometer-central-agent" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.145042 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="ceilometer-central-agent" Feb 17 09:10:54 crc kubenswrapper[4813]: E0217 09:10:54.145053 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="sg-core" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.145060 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="sg-core" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.145230 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="proxy-httpd" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.145251 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="ceilometer-notification-agent" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.145264 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="sg-core" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.145282 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" containerName="ceilometer-central-agent" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.147043 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.151087 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.151218 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.151434 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.152023 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.280219 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-scripts\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.281275 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/469b9424-2e6d-48e0-abc6-0076b618d2a3-log-httpd\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.281448 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq5cj\" (UniqueName: \"kubernetes.io/projected/469b9424-2e6d-48e0-abc6-0076b618d2a3-kube-api-access-qq5cj\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.281513 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.281786 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-config-data\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.281837 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.281860 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/469b9424-2e6d-48e0-abc6-0076b618d2a3-run-httpd\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.281934 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.382726 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.382789 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-scripts\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.382820 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/469b9424-2e6d-48e0-abc6-0076b618d2a3-log-httpd\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.382841 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq5cj\" (UniqueName: \"kubernetes.io/projected/469b9424-2e6d-48e0-abc6-0076b618d2a3-kube-api-access-qq5cj\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.382859 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.382911 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-config-data\") pod \"ceilometer-0\" (UID: 
\"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.382930 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.382945 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/469b9424-2e6d-48e0-abc6-0076b618d2a3-run-httpd\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.383971 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/469b9424-2e6d-48e0-abc6-0076b618d2a3-log-httpd\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.384070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/469b9424-2e6d-48e0-abc6-0076b618d2a3-run-httpd\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.387283 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.387582 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.388148 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-scripts\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.388665 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.404350 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-config-data\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.407097 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq5cj\" (UniqueName: \"kubernetes.io/projected/469b9424-2e6d-48e0-abc6-0076b618d2a3-kube-api-access-qq5cj\") pod \"ceilometer-0\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:54 crc kubenswrapper[4813]: I0217 09:10:54.464067 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:55 crc kubenswrapper[4813]: W0217 09:10:55.015039 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod469b9424_2e6d_48e0_abc6_0076b618d2a3.slice/crio-48c605ba23de02dbebdb5fd63a7e8644f9c0669a46662d3f33153b7b5f15726f WatchSource:0}: Error finding container 48c605ba23de02dbebdb5fd63a7e8644f9c0669a46662d3f33153b7b5f15726f: Status 404 returned error can't find the container with id 48c605ba23de02dbebdb5fd63a7e8644f9c0669a46662d3f33153b7b5f15726f Feb 17 09:10:55 crc kubenswrapper[4813]: I0217 09:10:55.026237 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:10:55 crc kubenswrapper[4813]: I0217 09:10:55.070465 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"469b9424-2e6d-48e0-abc6-0076b618d2a3","Type":"ContainerStarted","Data":"48c605ba23de02dbebdb5fd63a7e8644f9c0669a46662d3f33153b7b5f15726f"} Feb 17 09:10:55 crc kubenswrapper[4813]: I0217 09:10:55.130882 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ca13cf5-6e58-428e-a1a1-0d79bb38b43e" path="/var/lib/kubelet/pods/3ca13cf5-6e58-428e-a1a1-0d79bb38b43e/volumes" Feb 17 09:10:56 crc kubenswrapper[4813]: I0217 09:10:56.082385 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"469b9424-2e6d-48e0-abc6-0076b618d2a3","Type":"ContainerStarted","Data":"dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107"} Feb 17 09:10:57 crc kubenswrapper[4813]: I0217 09:10:57.092560 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"469b9424-2e6d-48e0-abc6-0076b618d2a3","Type":"ContainerStarted","Data":"a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312"} Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 
09:10:58.103235 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"469b9424-2e6d-48e0-abc6-0076b618d2a3","Type":"ContainerStarted","Data":"e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd"} Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.752839 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.754464 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.770598 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.875294 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.875560 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.875626 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-logs\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc 
kubenswrapper[4813]: I0217 09:10:58.875729 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.875806 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.875922 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtjkj\" (UniqueName: \"kubernetes.io/projected/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-kube-api-access-rtjkj\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.976914 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.976967 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 
09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.977025 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtjkj\" (UniqueName: \"kubernetes.io/projected/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-kube-api-access-rtjkj\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.977075 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.977104 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.977129 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-logs\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.977638 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-logs\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.981585 4813 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.982383 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.983535 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.984212 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:58 crc kubenswrapper[4813]: I0217 09:10:58.998026 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtjkj\" (UniqueName: \"kubernetes.io/projected/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-kube-api-access-rtjkj\") pod \"watcher-kuttl-api-2\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:59 crc kubenswrapper[4813]: I0217 09:10:59.090088 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:10:59 crc kubenswrapper[4813]: I0217 09:10:59.123400 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"469b9424-2e6d-48e0-abc6-0076b618d2a3","Type":"ContainerStarted","Data":"8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb"} Feb 17 09:10:59 crc kubenswrapper[4813]: I0217 09:10:59.123528 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:10:59 crc kubenswrapper[4813]: I0217 09:10:59.150907 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.6425655369999999 podStartE2EDuration="5.150885112s" podCreationTimestamp="2026-02-17 09:10:54 +0000 UTC" firstStartedPulling="2026-02-17 09:10:55.017827341 +0000 UTC m=+1802.678588564" lastFinishedPulling="2026-02-17 09:10:58.526146916 +0000 UTC m=+1806.186908139" observedRunningTime="2026-02-17 09:10:59.144882781 +0000 UTC m=+1806.805644014" watchObservedRunningTime="2026-02-17 09:10:59.150885112 +0000 UTC m=+1806.811646335" Feb 17 09:10:59 crc kubenswrapper[4813]: I0217 09:10:59.594390 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Feb 17 09:11:00 crc kubenswrapper[4813]: I0217 09:11:00.136814 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0","Type":"ContainerStarted","Data":"7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5"} Feb 17 09:11:00 crc kubenswrapper[4813]: I0217 09:11:00.137187 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0","Type":"ContainerStarted","Data":"19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc"} Feb 17 
09:11:00 crc kubenswrapper[4813]: I0217 09:11:00.137204 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0","Type":"ContainerStarted","Data":"addac9f9113da5927822ee7aa8289397567928110451c923fbc4879b47847a93"} Feb 17 09:11:01 crc kubenswrapper[4813]: I0217 09:11:01.144259 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:11:01 crc kubenswrapper[4813]: I0217 09:11:01.676054 4813 scope.go:117] "RemoveContainer" containerID="b3df3c95d0e6625b92ac626e93daf0905c2dab15eda9c9f9cdd9a39db3099dc2" Feb 17 09:11:03 crc kubenswrapper[4813]: I0217 09:11:03.285919 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:11:03 crc kubenswrapper[4813]: I0217 09:11:03.315197 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-2" podStartSLOduration=5.315171402 podStartE2EDuration="5.315171402s" podCreationTimestamp="2026-02-17 09:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 09:11:00.173238195 +0000 UTC m=+1807.833999458" watchObservedRunningTime="2026-02-17 09:11:03.315171402 +0000 UTC m=+1810.975932625" Feb 17 09:11:04 crc kubenswrapper[4813]: I0217 09:11:04.090963 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:11:04 crc kubenswrapper[4813]: I0217 09:11:04.112553 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:11:04 crc kubenswrapper[4813]: E0217 09:11:04.112810 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:11:09 crc kubenswrapper[4813]: I0217 09:11:09.090447 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:11:09 crc kubenswrapper[4813]: I0217 09:11:09.098808 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:11:09 crc kubenswrapper[4813]: I0217 09:11:09.230459 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:11:10 crc kubenswrapper[4813]: I0217 09:11:10.322664 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Feb 17 09:11:10 crc kubenswrapper[4813]: I0217 09:11:10.332999 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 17 09:11:10 crc kubenswrapper[4813]: I0217 09:11:10.333273 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="15c14ce6-cc09-4b5d-8b16-8c831809e097" containerName="watcher-kuttl-api-log" containerID="cri-o://c03871266e4a32a1009bdf365eca13d4791746c307a65c4a8aa14fc2e4d2d614" gracePeriod=30 Feb 17 09:11:10 crc kubenswrapper[4813]: I0217 09:11:10.333723 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="15c14ce6-cc09-4b5d-8b16-8c831809e097" containerName="watcher-api" containerID="cri-o://3cef7a096374e68894fcbfda9d1ad13de4be936e3fe232da5e40a4cfc32cf219" gracePeriod=30 Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.241466 4813 generic.go:334] 
"Generic (PLEG): container finished" podID="15c14ce6-cc09-4b5d-8b16-8c831809e097" containerID="3cef7a096374e68894fcbfda9d1ad13de4be936e3fe232da5e40a4cfc32cf219" exitCode=0 Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.241837 4813 generic.go:334] "Generic (PLEG): container finished" podID="15c14ce6-cc09-4b5d-8b16-8c831809e097" containerID="c03871266e4a32a1009bdf365eca13d4791746c307a65c4a8aa14fc2e4d2d614" exitCode=143 Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.242053 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" containerName="watcher-kuttl-api-log" containerID="cri-o://19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc" gracePeriod=30 Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.242138 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"15c14ce6-cc09-4b5d-8b16-8c831809e097","Type":"ContainerDied","Data":"3cef7a096374e68894fcbfda9d1ad13de4be936e3fe232da5e40a4cfc32cf219"} Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.242175 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"15c14ce6-cc09-4b5d-8b16-8c831809e097","Type":"ContainerDied","Data":"c03871266e4a32a1009bdf365eca13d4791746c307a65c4a8aa14fc2e4d2d614"} Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.242190 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"15c14ce6-cc09-4b5d-8b16-8c831809e097","Type":"ContainerDied","Data":"b882fb05b8fd52a07285883c7fe006db1ac185a8852da0d9d71d00079205000c"} Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.242204 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b882fb05b8fd52a07285883c7fe006db1ac185a8852da0d9d71d00079205000c" Feb 17 09:11:11 crc 
kubenswrapper[4813]: I0217 09:11:11.242613 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" containerName="watcher-api" containerID="cri-o://7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5" gracePeriod=30 Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.277036 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.349978 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c14ce6-cc09-4b5d-8b16-8c831809e097-logs\") pod \"15c14ce6-cc09-4b5d-8b16-8c831809e097\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.350072 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfztc\" (UniqueName: \"kubernetes.io/projected/15c14ce6-cc09-4b5d-8b16-8c831809e097-kube-api-access-mfztc\") pod \"15c14ce6-cc09-4b5d-8b16-8c831809e097\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.350188 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-cert-memcached-mtls\") pod \"15c14ce6-cc09-4b5d-8b16-8c831809e097\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.350253 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-custom-prometheus-ca\") pod \"15c14ce6-cc09-4b5d-8b16-8c831809e097\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " Feb 17 09:11:11 crc 
kubenswrapper[4813]: I0217 09:11:11.350283 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-combined-ca-bundle\") pod \"15c14ce6-cc09-4b5d-8b16-8c831809e097\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.350362 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-config-data\") pod \"15c14ce6-cc09-4b5d-8b16-8c831809e097\" (UID: \"15c14ce6-cc09-4b5d-8b16-8c831809e097\") " Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.357461 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15c14ce6-cc09-4b5d-8b16-8c831809e097-logs" (OuterVolumeSpecName: "logs") pod "15c14ce6-cc09-4b5d-8b16-8c831809e097" (UID: "15c14ce6-cc09-4b5d-8b16-8c831809e097"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.358232 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15c14ce6-cc09-4b5d-8b16-8c831809e097-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.360561 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c14ce6-cc09-4b5d-8b16-8c831809e097-kube-api-access-mfztc" (OuterVolumeSpecName: "kube-api-access-mfztc") pod "15c14ce6-cc09-4b5d-8b16-8c831809e097" (UID: "15c14ce6-cc09-4b5d-8b16-8c831809e097"). InnerVolumeSpecName "kube-api-access-mfztc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.384644 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "15c14ce6-cc09-4b5d-8b16-8c831809e097" (UID: "15c14ce6-cc09-4b5d-8b16-8c831809e097"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.391959 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15c14ce6-cc09-4b5d-8b16-8c831809e097" (UID: "15c14ce6-cc09-4b5d-8b16-8c831809e097"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.414012 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-config-data" (OuterVolumeSpecName: "config-data") pod "15c14ce6-cc09-4b5d-8b16-8c831809e097" (UID: "15c14ce6-cc09-4b5d-8b16-8c831809e097"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.445007 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "15c14ce6-cc09-4b5d-8b16-8c831809e097" (UID: "15c14ce6-cc09-4b5d-8b16-8c831809e097"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.460279 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.460333 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.460347 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.460360 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15c14ce6-cc09-4b5d-8b16-8c831809e097-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:11 crc kubenswrapper[4813]: I0217 09:11:11.460371 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfztc\" (UniqueName: \"kubernetes.io/projected/15c14ce6-cc09-4b5d-8b16-8c831809e097-kube-api-access-mfztc\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.093763 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.171202 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-cert-memcached-mtls\") pod \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.171264 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtjkj\" (UniqueName: \"kubernetes.io/projected/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-kube-api-access-rtjkj\") pod \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.171293 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-config-data\") pod \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.171467 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-combined-ca-bundle\") pod \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.171508 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-custom-prometheus-ca\") pod \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.171534 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-logs\") pod \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\" (UID: \"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0\") " Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.172565 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-logs" (OuterVolumeSpecName: "logs") pod "9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" (UID: "9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.182606 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-kube-api-access-rtjkj" (OuterVolumeSpecName: "kube-api-access-rtjkj") pod "9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" (UID: "9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0"). InnerVolumeSpecName "kube-api-access-rtjkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.192825 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" (UID: "9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.196929 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" (UID: "9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.226288 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-config-data" (OuterVolumeSpecName: "config-data") pod "9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" (UID: "9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.227930 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" (UID: "9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.249941 4813 generic.go:334] "Generic (PLEG): container finished" podID="9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" containerID="7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5" exitCode=0 Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.249974 4813 generic.go:334] "Generic (PLEG): container finished" podID="9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" containerID="19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc" exitCode=143 Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.250033 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.250383 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0","Type":"ContainerDied","Data":"7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5"} Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.250482 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0","Type":"ContainerDied","Data":"19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc"} Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.250555 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0","Type":"ContainerDied","Data":"addac9f9113da5927822ee7aa8289397567928110451c923fbc4879b47847a93"} Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.250567 4813 scope.go:117] "RemoveContainer" containerID="7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.250925 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.272974 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.273002 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.273015 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.273027 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.273039 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtjkj\" (UniqueName: \"kubernetes.io/projected/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-kube-api-access-rtjkj\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.273051 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.296112 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.297515 4813 scope.go:117] "RemoveContainer" 
containerID="19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.311394 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.320802 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.325555 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.339665 4813 scope.go:117] "RemoveContainer" containerID="7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5" Feb 17 09:11:12 crc kubenswrapper[4813]: E0217 09:11:12.340421 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5\": container with ID starting with 7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5 not found: ID does not exist" containerID="7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.340469 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5"} err="failed to get container status \"7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5\": rpc error: code = NotFound desc = could not find container \"7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5\": container with ID starting with 7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5 not found: ID does not exist" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.340496 4813 scope.go:117] "RemoveContainer" containerID="19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc" Feb 
17 09:11:12 crc kubenswrapper[4813]: E0217 09:11:12.340912 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc\": container with ID starting with 19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc not found: ID does not exist" containerID="19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.340951 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc"} err="failed to get container status \"19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc\": rpc error: code = NotFound desc = could not find container \"19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc\": container with ID starting with 19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc not found: ID does not exist" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.340977 4813 scope.go:117] "RemoveContainer" containerID="7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.341483 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5"} err="failed to get container status \"7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5\": rpc error: code = NotFound desc = could not find container \"7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5\": container with ID starting with 7535ad85d2c54d5ea22fcbad10e8a7010d795c6e00b0c55f282deb01c1fdd9b5 not found: ID does not exist" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.341523 4813 scope.go:117] "RemoveContainer" 
containerID="19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.341834 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc"} err="failed to get container status \"19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc\": rpc error: code = NotFound desc = could not find container \"19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc\": container with ID starting with 19c8a4ffb87526172096b2d55fc6cac32031656ef4b438d43e96b9cd62f674bc not found: ID does not exist" Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.590004 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.590678 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f01fe088-dd12-4772-93a5-03e4c0cee445" containerName="watcher-api" containerID="cri-o://4706e9317130c5b9d1dba9c7dcb0dd4612a41b59b42108cb509cd26ecd8d8ce2" gracePeriod=30 Feb 17 09:11:12 crc kubenswrapper[4813]: I0217 09:11:12.590292 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f01fe088-dd12-4772-93a5-03e4c0cee445" containerName="watcher-kuttl-api-log" containerID="cri-o://e895350aa8b6859369d733ebaf5e899a96e527cf731a88f17ae36ef54eeacc35" gracePeriod=30 Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.119929 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c14ce6-cc09-4b5d-8b16-8c831809e097" path="/var/lib/kubelet/pods/15c14ce6-cc09-4b5d-8b16-8c831809e097/volumes" Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.121032 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" 
path="/var/lib/kubelet/pods/9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0/volumes" Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.258793 4813 generic.go:334] "Generic (PLEG): container finished" podID="f01fe088-dd12-4772-93a5-03e4c0cee445" containerID="4706e9317130c5b9d1dba9c7dcb0dd4612a41b59b42108cb509cd26ecd8d8ce2" exitCode=0 Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.259015 4813 generic.go:334] "Generic (PLEG): container finished" podID="f01fe088-dd12-4772-93a5-03e4c0cee445" containerID="e895350aa8b6859369d733ebaf5e899a96e527cf731a88f17ae36ef54eeacc35" exitCode=143 Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.259036 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f01fe088-dd12-4772-93a5-03e4c0cee445","Type":"ContainerDied","Data":"4706e9317130c5b9d1dba9c7dcb0dd4612a41b59b42108cb509cd26ecd8d8ce2"} Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.259182 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f01fe088-dd12-4772-93a5-03e4c0cee445","Type":"ContainerDied","Data":"e895350aa8b6859369d733ebaf5e899a96e527cf731a88f17ae36ef54eeacc35"} Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.829455 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k"] Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.850261 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4tt6k"] Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.868969 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.869177 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="d97c34c8-66f9-406a-b988-14449fcc40b0" 
containerName="watcher-applier" containerID="cri-o://75cc0ae3fe13e81663970579bc25f71b8594bcf2fce98fabd410109b65fd1a04" gracePeriod=30 Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.929513 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher95cc-account-delete-bld2m"] Feb 17 09:11:13 crc kubenswrapper[4813]: E0217 09:11:13.929842 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" containerName="watcher-kuttl-api-log" Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.929854 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" containerName="watcher-kuttl-api-log" Feb 17 09:11:13 crc kubenswrapper[4813]: E0217 09:11:13.929873 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c14ce6-cc09-4b5d-8b16-8c831809e097" containerName="watcher-kuttl-api-log" Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.929879 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c14ce6-cc09-4b5d-8b16-8c831809e097" containerName="watcher-kuttl-api-log" Feb 17 09:11:13 crc kubenswrapper[4813]: E0217 09:11:13.929895 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c14ce6-cc09-4b5d-8b16-8c831809e097" containerName="watcher-api" Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.929902 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c14ce6-cc09-4b5d-8b16-8c831809e097" containerName="watcher-api" Feb 17 09:11:13 crc kubenswrapper[4813]: E0217 09:11:13.929912 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" containerName="watcher-api" Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.929918 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" containerName="watcher-api" Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.930068 4813 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" containerName="watcher-api" Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.930083 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b467aa0-d8b4-4bed-8a18-e572fa8ba2f0" containerName="watcher-kuttl-api-log" Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.930092 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c14ce6-cc09-4b5d-8b16-8c831809e097" containerName="watcher-kuttl-api-log" Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.930104 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c14ce6-cc09-4b5d-8b16-8c831809e097" containerName="watcher-api" Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.930655 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.935736 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher95cc-account-delete-bld2m"] Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.985639 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:11:13 crc kubenswrapper[4813]: I0217 09:11:13.986413 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="9ae336ae-5af7-44f0-aa57-e1ab5da43770" containerName="watcher-decision-engine" containerID="cri-o://a00c3d6ed08e7ae45f28d7867973a21bb17ff5b6b51617afdc73f41a3b92de6a" gracePeriod=30 Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.006572 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkch5\" (UniqueName: \"kubernetes.io/projected/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d-kube-api-access-lkch5\") pod 
\"watcher95cc-account-delete-bld2m\" (UID: \"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d\") " pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.006651 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d-operator-scripts\") pod \"watcher95cc-account-delete-bld2m\" (UID: \"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d\") " pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.093259 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.108142 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d-operator-scripts\") pod \"watcher95cc-account-delete-bld2m\" (UID: \"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d\") " pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.108264 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkch5\" (UniqueName: \"kubernetes.io/projected/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d-kube-api-access-lkch5\") pod \"watcher95cc-account-delete-bld2m\" (UID: \"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d\") " pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.109154 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d-operator-scripts\") pod \"watcher95cc-account-delete-bld2m\" (UID: \"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d\") " 
pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.152328 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkch5\" (UniqueName: \"kubernetes.io/projected/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d-kube-api-access-lkch5\") pod \"watcher95cc-account-delete-bld2m\" (UID: \"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d\") " pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.209202 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-custom-prometheus-ca\") pod \"f01fe088-dd12-4772-93a5-03e4c0cee445\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.209698 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f01fe088-dd12-4772-93a5-03e4c0cee445-logs\") pod \"f01fe088-dd12-4772-93a5-03e4c0cee445\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.209782 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-config-data\") pod \"f01fe088-dd12-4772-93a5-03e4c0cee445\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.209858 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-combined-ca-bundle\") pod \"f01fe088-dd12-4772-93a5-03e4c0cee445\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.209978 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-cert-memcached-mtls\") pod \"f01fe088-dd12-4772-93a5-03e4c0cee445\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.210067 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg8qm\" (UniqueName: \"kubernetes.io/projected/f01fe088-dd12-4772-93a5-03e4c0cee445-kube-api-access-fg8qm\") pod \"f01fe088-dd12-4772-93a5-03e4c0cee445\" (UID: \"f01fe088-dd12-4772-93a5-03e4c0cee445\") " Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.211335 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f01fe088-dd12-4772-93a5-03e4c0cee445-logs" (OuterVolumeSpecName: "logs") pod "f01fe088-dd12-4772-93a5-03e4c0cee445" (UID: "f01fe088-dd12-4772-93a5-03e4c0cee445"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.248633 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01fe088-dd12-4772-93a5-03e4c0cee445-kube-api-access-fg8qm" (OuterVolumeSpecName: "kube-api-access-fg8qm") pod "f01fe088-dd12-4772-93a5-03e4c0cee445" (UID: "f01fe088-dd12-4772-93a5-03e4c0cee445"). InnerVolumeSpecName "kube-api-access-fg8qm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.252493 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f01fe088-dd12-4772-93a5-03e4c0cee445" (UID: "f01fe088-dd12-4772-93a5-03e4c0cee445"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.264180 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.282446 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f01fe088-dd12-4772-93a5-03e4c0cee445" (UID: "f01fe088-dd12-4772-93a5-03e4c0cee445"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.298078 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f01fe088-dd12-4772-93a5-03e4c0cee445","Type":"ContainerDied","Data":"c6d13ceb806558c941d6653330579a3b13341cf4866c42a9069452b57d3425df"} Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.298127 4813 scope.go:117] "RemoveContainer" containerID="4706e9317130c5b9d1dba9c7dcb0dd4612a41b59b42108cb509cd26ecd8d8ce2" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.298290 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.303037 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-config-data" (OuterVolumeSpecName: "config-data") pod "f01fe088-dd12-4772-93a5-03e4c0cee445" (UID: "f01fe088-dd12-4772-93a5-03e4c0cee445"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.313678 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f01fe088-dd12-4772-93a5-03e4c0cee445-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.313715 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.313724 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.313926 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg8qm\" (UniqueName: \"kubernetes.io/projected/f01fe088-dd12-4772-93a5-03e4c0cee445-kube-api-access-fg8qm\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.313939 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.381974 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f01fe088-dd12-4772-93a5-03e4c0cee445" (UID: "f01fe088-dd12-4772-93a5-03e4c0cee445"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.386558 4813 scope.go:117] "RemoveContainer" containerID="e895350aa8b6859369d733ebaf5e899a96e527cf731a88f17ae36ef54eeacc35" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.415956 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f01fe088-dd12-4772-93a5-03e4c0cee445-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.645356 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.656644 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Feb 17 09:11:14 crc kubenswrapper[4813]: I0217 09:11:14.760502 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher95cc-account-delete-bld2m"] Feb 17 09:11:14 crc kubenswrapper[4813]: W0217 09:11:14.762378 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf5d2a37_e9b1_4754_8db7_3bd18cb24d8d.slice/crio-901bac20ff6e601e0328acb6518c4828bc642d240b21512aedbf2827704c54f1 WatchSource:0}: Error finding container 901bac20ff6e601e0328acb6518c4828bc642d240b21512aedbf2827704c54f1: Status 404 returned error can't find the container with id 901bac20ff6e601e0328acb6518c4828bc642d240b21512aedbf2827704c54f1 Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.126852 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01fe088-dd12-4772-93a5-03e4c0cee445" path="/var/lib/kubelet/pods/f01fe088-dd12-4772-93a5-03e4c0cee445/volumes" Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.127874 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc14c901-a140-4930-a64a-074ffd207f24" 
path="/var/lib/kubelet/pods/fc14c901-a140-4930-a64a-074ffd207f24/volumes" Feb 17 09:11:15 crc kubenswrapper[4813]: E0217 09:11:15.268283 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf5d2a37_e9b1_4754_8db7_3bd18cb24d8d.slice/crio-conmon-9424e398cdf54a3927e89fd28e89ac0a89df5e476bcc8675d3326c4502ac6fd9.scope\": RecentStats: unable to find data in memory cache]" Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.311248 4813 generic.go:334] "Generic (PLEG): container finished" podID="bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d" containerID="9424e398cdf54a3927e89fd28e89ac0a89df5e476bcc8675d3326c4502ac6fd9" exitCode=0 Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.311350 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" event={"ID":"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d","Type":"ContainerDied","Data":"9424e398cdf54a3927e89fd28e89ac0a89df5e476bcc8675d3326c4502ac6fd9"} Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.311400 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" event={"ID":"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d","Type":"ContainerStarted","Data":"901bac20ff6e601e0328acb6518c4828bc642d240b21512aedbf2827704c54f1"} Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.316771 4813 generic.go:334] "Generic (PLEG): container finished" podID="d97c34c8-66f9-406a-b988-14449fcc40b0" containerID="75cc0ae3fe13e81663970579bc25f71b8594bcf2fce98fabd410109b65fd1a04" exitCode=0 Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.316810 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d97c34c8-66f9-406a-b988-14449fcc40b0","Type":"ContainerDied","Data":"75cc0ae3fe13e81663970579bc25f71b8594bcf2fce98fabd410109b65fd1a04"} Feb 17 
09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.635873 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.735713 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d97c34c8-66f9-406a-b988-14449fcc40b0-logs\") pod \"d97c34c8-66f9-406a-b988-14449fcc40b0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.735781 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-combined-ca-bundle\") pod \"d97c34c8-66f9-406a-b988-14449fcc40b0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.735804 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-cert-memcached-mtls\") pod \"d97c34c8-66f9-406a-b988-14449fcc40b0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.735948 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88qxc\" (UniqueName: \"kubernetes.io/projected/d97c34c8-66f9-406a-b988-14449fcc40b0-kube-api-access-88qxc\") pod \"d97c34c8-66f9-406a-b988-14449fcc40b0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.735973 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-config-data\") pod \"d97c34c8-66f9-406a-b988-14449fcc40b0\" (UID: \"d97c34c8-66f9-406a-b988-14449fcc40b0\") " Feb 17 09:11:15 crc kubenswrapper[4813]: 
I0217 09:11:15.736298 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d97c34c8-66f9-406a-b988-14449fcc40b0-logs" (OuterVolumeSpecName: "logs") pod "d97c34c8-66f9-406a-b988-14449fcc40b0" (UID: "d97c34c8-66f9-406a-b988-14449fcc40b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.742514 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97c34c8-66f9-406a-b988-14449fcc40b0-kube-api-access-88qxc" (OuterVolumeSpecName: "kube-api-access-88qxc") pod "d97c34c8-66f9-406a-b988-14449fcc40b0" (UID: "d97c34c8-66f9-406a-b988-14449fcc40b0"). InnerVolumeSpecName "kube-api-access-88qxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.782126 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d97c34c8-66f9-406a-b988-14449fcc40b0" (UID: "d97c34c8-66f9-406a-b988-14449fcc40b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.793246 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-config-data" (OuterVolumeSpecName: "config-data") pod "d97c34c8-66f9-406a-b988-14449fcc40b0" (UID: "d97c34c8-66f9-406a-b988-14449fcc40b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.828461 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "d97c34c8-66f9-406a-b988-14449fcc40b0" (UID: "d97c34c8-66f9-406a-b988-14449fcc40b0"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.837623 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88qxc\" (UniqueName: \"kubernetes.io/projected/d97c34c8-66f9-406a-b988-14449fcc40b0-kube-api-access-88qxc\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.837661 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.837674 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d97c34c8-66f9-406a-b988-14449fcc40b0-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.837683 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:15 crc kubenswrapper[4813]: I0217 09:11:15.837692 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d97c34c8-66f9-406a-b988-14449fcc40b0-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.264367 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.264939 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="ceilometer-central-agent" containerID="cri-o://dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107" gracePeriod=30 Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.265013 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="ceilometer-notification-agent" containerID="cri-o://a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312" gracePeriod=30 Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.265015 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="sg-core" containerID="cri-o://e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd" gracePeriod=30 Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.265125 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="proxy-httpd" containerID="cri-o://8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb" gracePeriod=30 Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.326481 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d97c34c8-66f9-406a-b988-14449fcc40b0","Type":"ContainerDied","Data":"b27010c21a5e125451336558e84a8135ae84cf6b067c39ed8525145b89272284"} Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.326594 4813 scope.go:117] "RemoveContainer" containerID="75cc0ae3fe13e81663970579bc25f71b8594bcf2fce98fabd410109b65fd1a04" Feb 17 
09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.326725 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.361742 4813 generic.go:334] "Generic (PLEG): container finished" podID="9ae336ae-5af7-44f0-aa57-e1ab5da43770" containerID="a00c3d6ed08e7ae45f28d7867973a21bb17ff5b6b51617afdc73f41a3b92de6a" exitCode=0 Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.361944 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9ae336ae-5af7-44f0-aa57-e1ab5da43770","Type":"ContainerDied","Data":"a00c3d6ed08e7ae45f28d7867973a21bb17ff5b6b51617afdc73f41a3b92de6a"} Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.402385 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.404718 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.645826 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.250:3000/\": read tcp 10.217.0.2:34050->10.217.0.250:3000: read: connection reset by peer" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.803554 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.819761 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.856898 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d-operator-scripts\") pod \"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d\" (UID: \"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d\") " Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.856966 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-custom-prometheus-ca\") pod \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.857040 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkch5\" (UniqueName: \"kubernetes.io/projected/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d-kube-api-access-lkch5\") pod \"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d\" (UID: \"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d\") " Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.857081 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-combined-ca-bundle\") pod \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.857146 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae336ae-5af7-44f0-aa57-e1ab5da43770-logs\") pod \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.857175 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-config-data\") pod \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.857269 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-cert-memcached-mtls\") pod \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.857302 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbxtw\" (UniqueName: \"kubernetes.io/projected/9ae336ae-5af7-44f0-aa57-e1ab5da43770-kube-api-access-bbxtw\") pod \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\" (UID: \"9ae336ae-5af7-44f0-aa57-e1ab5da43770\") " Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.858794 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d" (UID: "bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.861692 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae336ae-5af7-44f0-aa57-e1ab5da43770-logs" (OuterVolumeSpecName: "logs") pod "9ae336ae-5af7-44f0-aa57-e1ab5da43770" (UID: "9ae336ae-5af7-44f0-aa57-e1ab5da43770"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.862342 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae336ae-5af7-44f0-aa57-e1ab5da43770-kube-api-access-bbxtw" (OuterVolumeSpecName: "kube-api-access-bbxtw") pod "9ae336ae-5af7-44f0-aa57-e1ab5da43770" (UID: "9ae336ae-5af7-44f0-aa57-e1ab5da43770"). InnerVolumeSpecName "kube-api-access-bbxtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.863749 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d-kube-api-access-lkch5" (OuterVolumeSpecName: "kube-api-access-lkch5") pod "bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d" (UID: "bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d"). InnerVolumeSpecName "kube-api-access-lkch5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.907038 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ae336ae-5af7-44f0-aa57-e1ab5da43770" (UID: "9ae336ae-5af7-44f0-aa57-e1ab5da43770"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.913434 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9ae336ae-5af7-44f0-aa57-e1ab5da43770" (UID: "9ae336ae-5af7-44f0-aa57-e1ab5da43770"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.927804 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-config-data" (OuterVolumeSpecName: "config-data") pod "9ae336ae-5af7-44f0-aa57-e1ab5da43770" (UID: "9ae336ae-5af7-44f0-aa57-e1ab5da43770"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.941614 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "9ae336ae-5af7-44f0-aa57-e1ab5da43770" (UID: "9ae336ae-5af7-44f0-aa57-e1ab5da43770"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.959166 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkch5\" (UniqueName: \"kubernetes.io/projected/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d-kube-api-access-lkch5\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.959203 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.959222 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae336ae-5af7-44f0-aa57-e1ab5da43770-logs\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.959232 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 
09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.959242 4813 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.959250 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbxtw\" (UniqueName: \"kubernetes.io/projected/9ae336ae-5af7-44f0-aa57-e1ab5da43770-kube-api-access-bbxtw\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.959259 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:16 crc kubenswrapper[4813]: I0217 09:11:16.959279 4813 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9ae336ae-5af7-44f0-aa57-e1ab5da43770-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.119716 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d97c34c8-66f9-406a-b988-14449fcc40b0" path="/var/lib/kubelet/pods/d97c34c8-66f9-406a-b988-14449fcc40b0/volumes" Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.371129 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" event={"ID":"bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d","Type":"ContainerDied","Data":"901bac20ff6e601e0328acb6518c4828bc642d240b21512aedbf2827704c54f1"} Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.371174 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="901bac20ff6e601e0328acb6518c4828bc642d240b21512aedbf2827704c54f1" Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.372224 4813 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher95cc-account-delete-bld2m" Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.373031 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.374239 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"9ae336ae-5af7-44f0-aa57-e1ab5da43770","Type":"ContainerDied","Data":"024f429e92b57dddff910cec52f49ec6abf24245342601f762c41197ddd0cfe5"} Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.374297 4813 scope.go:117] "RemoveContainer" containerID="a00c3d6ed08e7ae45f28d7867973a21bb17ff5b6b51617afdc73f41a3b92de6a" Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.395782 4813 generic.go:334] "Generic (PLEG): container finished" podID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerID="8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb" exitCode=0 Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.395804 4813 generic.go:334] "Generic (PLEG): container finished" podID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerID="e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd" exitCode=2 Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.395810 4813 generic.go:334] "Generic (PLEG): container finished" podID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerID="dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107" exitCode=0 Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.395851 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"469b9424-2e6d-48e0-abc6-0076b618d2a3","Type":"ContainerDied","Data":"8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb"} Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.395875 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"469b9424-2e6d-48e0-abc6-0076b618d2a3","Type":"ContainerDied","Data":"e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd"} Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.395886 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"469b9424-2e6d-48e0-abc6-0076b618d2a3","Type":"ContainerDied","Data":"dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107"} Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.407045 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.413941 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.876666 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.975716 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/469b9424-2e6d-48e0-abc6-0076b618d2a3-run-httpd\") pod \"469b9424-2e6d-48e0-abc6-0076b618d2a3\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.975780 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-ceilometer-tls-certs\") pod \"469b9424-2e6d-48e0-abc6-0076b618d2a3\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.975822 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq5cj\" (UniqueName: 
\"kubernetes.io/projected/469b9424-2e6d-48e0-abc6-0076b618d2a3-kube-api-access-qq5cj\") pod \"469b9424-2e6d-48e0-abc6-0076b618d2a3\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.975898 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/469b9424-2e6d-48e0-abc6-0076b618d2a3-log-httpd\") pod \"469b9424-2e6d-48e0-abc6-0076b618d2a3\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.976006 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-sg-core-conf-yaml\") pod \"469b9424-2e6d-48e0-abc6-0076b618d2a3\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.976036 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-combined-ca-bundle\") pod \"469b9424-2e6d-48e0-abc6-0076b618d2a3\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.976038 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/469b9424-2e6d-48e0-abc6-0076b618d2a3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "469b9424-2e6d-48e0-abc6-0076b618d2a3" (UID: "469b9424-2e6d-48e0-abc6-0076b618d2a3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.976070 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-scripts\") pod \"469b9424-2e6d-48e0-abc6-0076b618d2a3\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.976115 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-config-data\") pod \"469b9424-2e6d-48e0-abc6-0076b618d2a3\" (UID: \"469b9424-2e6d-48e0-abc6-0076b618d2a3\") " Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.976545 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/469b9424-2e6d-48e0-abc6-0076b618d2a3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.979939 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/469b9424-2e6d-48e0-abc6-0076b618d2a3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "469b9424-2e6d-48e0-abc6-0076b618d2a3" (UID: "469b9424-2e6d-48e0-abc6-0076b618d2a3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.982645 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/469b9424-2e6d-48e0-abc6-0076b618d2a3-kube-api-access-qq5cj" (OuterVolumeSpecName: "kube-api-access-qq5cj") pod "469b9424-2e6d-48e0-abc6-0076b618d2a3" (UID: "469b9424-2e6d-48e0-abc6-0076b618d2a3"). InnerVolumeSpecName "kube-api-access-qq5cj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:11:17 crc kubenswrapper[4813]: I0217 09:11:17.993583 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-scripts" (OuterVolumeSpecName: "scripts") pod "469b9424-2e6d-48e0-abc6-0076b618d2a3" (UID: "469b9424-2e6d-48e0-abc6-0076b618d2a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.001502 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "469b9424-2e6d-48e0-abc6-0076b618d2a3" (UID: "469b9424-2e6d-48e0-abc6-0076b618d2a3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.052517 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "469b9424-2e6d-48e0-abc6-0076b618d2a3" (UID: "469b9424-2e6d-48e0-abc6-0076b618d2a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.055162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "469b9424-2e6d-48e0-abc6-0076b618d2a3" (UID: "469b9424-2e6d-48e0-abc6-0076b618d2a3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.077961 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.077992 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq5cj\" (UniqueName: \"kubernetes.io/projected/469b9424-2e6d-48e0-abc6-0076b618d2a3-kube-api-access-qq5cj\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.078002 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/469b9424-2e6d-48e0-abc6-0076b618d2a3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.078011 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.078021 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.078030 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.080846 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-config-data" (OuterVolumeSpecName: "config-data") pod "469b9424-2e6d-48e0-abc6-0076b618d2a3" (UID: 
"469b9424-2e6d-48e0-abc6-0076b618d2a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.179628 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469b9424-2e6d-48e0-abc6-0076b618d2a3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.411691 4813 generic.go:334] "Generic (PLEG): container finished" podID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerID="a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312" exitCode=0 Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.412037 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"469b9424-2e6d-48e0-abc6-0076b618d2a3","Type":"ContainerDied","Data":"a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312"} Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.412065 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"469b9424-2e6d-48e0-abc6-0076b618d2a3","Type":"ContainerDied","Data":"48c605ba23de02dbebdb5fd63a7e8644f9c0669a46662d3f33153b7b5f15726f"} Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.412087 4813 scope.go:117] "RemoveContainer" containerID="8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.412208 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.443131 4813 scope.go:117] "RemoveContainer" containerID="e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.453897 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.465409 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.476940 4813 scope.go:117] "RemoveContainer" containerID="a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.497188 4813 scope.go:117] "RemoveContainer" containerID="dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507074 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.507503 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="proxy-httpd" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507518 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="proxy-httpd" Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.507536 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97c34c8-66f9-406a-b988-14449fcc40b0" containerName="watcher-applier" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507544 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97c34c8-66f9-406a-b988-14449fcc40b0" containerName="watcher-applier" Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.507554 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f01fe088-dd12-4772-93a5-03e4c0cee445" containerName="watcher-kuttl-api-log" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507561 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01fe088-dd12-4772-93a5-03e4c0cee445" containerName="watcher-kuttl-api-log" Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.507570 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="sg-core" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507576 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="sg-core" Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.507586 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="ceilometer-central-agent" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507592 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="ceilometer-central-agent" Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.507599 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01fe088-dd12-4772-93a5-03e4c0cee445" containerName="watcher-api" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507604 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01fe088-dd12-4772-93a5-03e4c0cee445" containerName="watcher-api" Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.507618 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="ceilometer-notification-agent" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507624 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="ceilometer-notification-agent" Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.507636 4813 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d" containerName="mariadb-account-delete" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507642 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d" containerName="mariadb-account-delete" Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.507655 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae336ae-5af7-44f0-aa57-e1ab5da43770" containerName="watcher-decision-engine" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507662 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae336ae-5af7-44f0-aa57-e1ab5da43770" containerName="watcher-decision-engine" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507798 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae336ae-5af7-44f0-aa57-e1ab5da43770" containerName="watcher-decision-engine" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507808 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01fe088-dd12-4772-93a5-03e4c0cee445" containerName="watcher-api" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507821 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="ceilometer-central-agent" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507830 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97c34c8-66f9-406a-b988-14449fcc40b0" containerName="watcher-applier" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507836 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01fe088-dd12-4772-93a5-03e4c0cee445" containerName="watcher-kuttl-api-log" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507845 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="ceilometer-notification-agent" Feb 17 09:11:18 crc 
kubenswrapper[4813]: I0217 09:11:18.507854 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="proxy-httpd" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507862 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" containerName="sg-core" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.507869 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d" containerName="mariadb-account-delete" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.509576 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.513625 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.513638 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.519012 4813 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.533556 4813 scope.go:117] "RemoveContainer" containerID="8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb" Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.534015 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb\": container with ID starting with 8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb not found: ID does not exist" containerID="8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 
09:11:18.534046 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb"} err="failed to get container status \"8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb\": rpc error: code = NotFound desc = could not find container \"8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb\": container with ID starting with 8a7e36fcbe19f58f7349eb3ba902b0b1438f2f64b477e1f49ce0e1191251e4cb not found: ID does not exist" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.534079 4813 scope.go:117] "RemoveContainer" containerID="e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd" Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.535420 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd\": container with ID starting with e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd not found: ID does not exist" containerID="e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.535473 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd"} err="failed to get container status \"e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd\": rpc error: code = NotFound desc = could not find container \"e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd\": container with ID starting with e9285b2fec3da78a37eeffe53816c6ec25995183de2de289640553c87609badd not found: ID does not exist" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.535503 4813 scope.go:117] "RemoveContainer" containerID="a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312" Feb 17 09:11:18 crc 
kubenswrapper[4813]: I0217 09:11:18.536587 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.537150 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312\": container with ID starting with a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312 not found: ID does not exist" containerID="a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.537187 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312"} err="failed to get container status \"a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312\": rpc error: code = NotFound desc = could not find container \"a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312\": container with ID starting with a910727c70d5d91c005776d5ac2b7d687f862ddcb45eaa0b4b269f1b1833b312 not found: ID does not exist" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.537206 4813 scope.go:117] "RemoveContainer" containerID="dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107" Feb 17 09:11:18 crc kubenswrapper[4813]: E0217 09:11:18.537966 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107\": container with ID starting with dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107 not found: ID does not exist" containerID="dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.537997 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107"} err="failed to get container status \"dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107\": rpc error: code = NotFound desc = could not find container \"dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107\": container with ID starting with dbcb53d894e2a62e732958074c759dfd6846491e4de289fbe8e7dc76ed4a5107 not found: ID does not exist" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.586289 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e499d50-6243-48a3-a11c-e264b891a21d-log-httpd\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.586408 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-config-data\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.586502 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.586527 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e499d50-6243-48a3-a11c-e264b891a21d-run-httpd\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 
09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.586556 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-scripts\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.586652 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47vx\" (UniqueName: \"kubernetes.io/projected/6e499d50-6243-48a3-a11c-e264b891a21d-kube-api-access-q47vx\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.586727 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.586795 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.688707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.688760 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.689441 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e499d50-6243-48a3-a11c-e264b891a21d-log-httpd\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.688783 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e499d50-6243-48a3-a11c-e264b891a21d-log-httpd\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.689701 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-config-data\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.690549 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.690617 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e499d50-6243-48a3-a11c-e264b891a21d-run-httpd\") 
pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.690665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-scripts\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.690703 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q47vx\" (UniqueName: \"kubernetes.io/projected/6e499d50-6243-48a3-a11c-e264b891a21d-kube-api-access-q47vx\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.690926 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e499d50-6243-48a3-a11c-e264b891a21d-run-httpd\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.693835 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.694005 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.695749 
4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-config-data\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.698290 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.699061 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e499d50-6243-48a3-a11c-e264b891a21d-scripts\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.706395 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q47vx\" (UniqueName: \"kubernetes.io/projected/6e499d50-6243-48a3-a11c-e264b891a21d-kube-api-access-q47vx\") pod \"ceilometer-0\" (UID: \"6e499d50-6243-48a3-a11c-e264b891a21d\") " pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.831007 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.986942 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hf9z7"] Feb 17 09:11:18 crc kubenswrapper[4813]: I0217 09:11:18.995324 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hf9z7"] Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.002919 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher95cc-account-delete-bld2m"] Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.010080 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd"] Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.015061 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher95cc-account-delete-bld2m"] Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.019693 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-95cc-account-create-update-bl7xd"] Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.111016 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.120582 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3170e2b7-c40d-4f17-8ef2-98415ccd30ea" path="/var/lib/kubelet/pods/3170e2b7-c40d-4f17-8ef2-98415ccd30ea/volumes" Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.121133 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="469b9424-2e6d-48e0-abc6-0076b618d2a3" path="/var/lib/kubelet/pods/469b9424-2e6d-48e0-abc6-0076b618d2a3/volumes" Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.121791 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609" 
path="/var/lib/kubelet/pods/5fb2d5ee-a8e4-4e5b-87b5-a28d0ea9d609/volumes" Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.122923 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae336ae-5af7-44f0-aa57-e1ab5da43770" path="/var/lib/kubelet/pods/9ae336ae-5af7-44f0-aa57-e1ab5da43770/volumes" Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.123497 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d" path="/var/lib/kubelet/pods/bf5d2a37-e9b1-4754-8db7-3bd18cb24d8d/volumes" Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.340094 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.422273 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e499d50-6243-48a3-a11c-e264b891a21d","Type":"ContainerStarted","Data":"5b330eeb8ebb65a86ae7f7517ab62487f8ed5980ff970f1455b6036cf2208b00"} Feb 17 09:11:19 crc kubenswrapper[4813]: I0217 09:11:19.424932 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerStarted","Data":"547bf834189b6e0ded62303fe339620a5821abad97ab1b7e1056ad7ed88e802b"} Feb 17 09:11:20 crc kubenswrapper[4813]: I0217 09:11:20.440856 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e499d50-6243-48a3-a11c-e264b891a21d","Type":"ContainerStarted","Data":"35189ac64c47c35bfa9a798cfa86d577c706c6fe2cbd18f3497da4e9753619d8"} Feb 17 09:11:21 crc kubenswrapper[4813]: I0217 09:11:21.450982 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"6e499d50-6243-48a3-a11c-e264b891a21d","Type":"ContainerStarted","Data":"f5010da768dd7a1f4dbd5c3cd3e9accbde9841924dff489dc82e38436e52397c"} Feb 17 09:11:22 crc kubenswrapper[4813]: I0217 09:11:22.462385 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e499d50-6243-48a3-a11c-e264b891a21d","Type":"ContainerStarted","Data":"97ffbba7ee213a22ce461a48a62736d63130b2b2a76a44de783e455c9ee8eca8"} Feb 17 09:11:23 crc kubenswrapper[4813]: I0217 09:11:23.472411 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e499d50-6243-48a3-a11c-e264b891a21d","Type":"ContainerStarted","Data":"719cdfbdcf95df0f86c35091c7778129177e9930a2effe43e3349b89c1fa88da"} Feb 17 09:11:23 crc kubenswrapper[4813]: I0217 09:11:23.472758 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:48 crc kubenswrapper[4813]: I0217 09:11:48.848301 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Feb 17 09:11:48 crc kubenswrapper[4813]: I0217 09:11:48.884230 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=27.220152213 podStartE2EDuration="30.884205747s" podCreationTimestamp="2026-02-17 09:11:18 +0000 UTC" firstStartedPulling="2026-02-17 09:11:19.331058766 +0000 UTC m=+1826.991819989" lastFinishedPulling="2026-02-17 09:11:22.9951123 +0000 UTC m=+1830.655873523" observedRunningTime="2026-02-17 09:11:23.521479858 +0000 UTC m=+1831.182241081" watchObservedRunningTime="2026-02-17 09:11:48.884205747 +0000 UTC m=+1856.544966970" Feb 17 09:11:51 crc kubenswrapper[4813]: I0217 09:11:51.243114 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6lrld/must-gather-hw7jb"] Feb 17 09:11:51 crc kubenswrapper[4813]: I0217 09:11:51.244589 4813 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lrld/must-gather-hw7jb" Feb 17 09:11:51 crc kubenswrapper[4813]: I0217 09:11:51.248226 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6lrld"/"openshift-service-ca.crt" Feb 17 09:11:51 crc kubenswrapper[4813]: I0217 09:11:51.248818 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6lrld"/"kube-root-ca.crt" Feb 17 09:11:51 crc kubenswrapper[4813]: I0217 09:11:51.279605 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6lrld/must-gather-hw7jb"] Feb 17 09:11:51 crc kubenswrapper[4813]: I0217 09:11:51.405679 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6a6ce433-9576-45df-8e22-e935702d27c1-must-gather-output\") pod \"must-gather-hw7jb\" (UID: \"6a6ce433-9576-45df-8e22-e935702d27c1\") " pod="openshift-must-gather-6lrld/must-gather-hw7jb" Feb 17 09:11:51 crc kubenswrapper[4813]: I0217 09:11:51.405902 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n47ch\" (UniqueName: \"kubernetes.io/projected/6a6ce433-9576-45df-8e22-e935702d27c1-kube-api-access-n47ch\") pod \"must-gather-hw7jb\" (UID: \"6a6ce433-9576-45df-8e22-e935702d27c1\") " pod="openshift-must-gather-6lrld/must-gather-hw7jb" Feb 17 09:11:51 crc kubenswrapper[4813]: I0217 09:11:51.507350 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n47ch\" (UniqueName: \"kubernetes.io/projected/6a6ce433-9576-45df-8e22-e935702d27c1-kube-api-access-n47ch\") pod \"must-gather-hw7jb\" (UID: \"6a6ce433-9576-45df-8e22-e935702d27c1\") " pod="openshift-must-gather-6lrld/must-gather-hw7jb" Feb 17 09:11:51 crc kubenswrapper[4813]: I0217 09:11:51.507458 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6a6ce433-9576-45df-8e22-e935702d27c1-must-gather-output\") pod \"must-gather-hw7jb\" (UID: \"6a6ce433-9576-45df-8e22-e935702d27c1\") " pod="openshift-must-gather-6lrld/must-gather-hw7jb" Feb 17 09:11:51 crc kubenswrapper[4813]: I0217 09:11:51.507927 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6a6ce433-9576-45df-8e22-e935702d27c1-must-gather-output\") pod \"must-gather-hw7jb\" (UID: \"6a6ce433-9576-45df-8e22-e935702d27c1\") " pod="openshift-must-gather-6lrld/must-gather-hw7jb" Feb 17 09:11:51 crc kubenswrapper[4813]: I0217 09:11:51.525855 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n47ch\" (UniqueName: \"kubernetes.io/projected/6a6ce433-9576-45df-8e22-e935702d27c1-kube-api-access-n47ch\") pod \"must-gather-hw7jb\" (UID: \"6a6ce433-9576-45df-8e22-e935702d27c1\") " pod="openshift-must-gather-6lrld/must-gather-hw7jb" Feb 17 09:11:51 crc kubenswrapper[4813]: I0217 09:11:51.564516 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lrld/must-gather-hw7jb" Feb 17 09:11:52 crc kubenswrapper[4813]: I0217 09:11:52.052085 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6lrld/must-gather-hw7jb"] Feb 17 09:11:52 crc kubenswrapper[4813]: I0217 09:11:52.059626 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 09:11:52 crc kubenswrapper[4813]: I0217 09:11:52.756327 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lrld/must-gather-hw7jb" event={"ID":"6a6ce433-9576-45df-8e22-e935702d27c1","Type":"ContainerStarted","Data":"fd3a01e5dfb13c50873277351331882b479bda13fd7a8cc972aa1adaf4c560c7"} Feb 17 09:12:00 crc kubenswrapper[4813]: I0217 09:12:00.842743 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lrld/must-gather-hw7jb" event={"ID":"6a6ce433-9576-45df-8e22-e935702d27c1","Type":"ContainerStarted","Data":"7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d"} Feb 17 09:12:01 crc kubenswrapper[4813]: I0217 09:12:01.820769 4813 scope.go:117] "RemoveContainer" containerID="687a2b900e7051ee434fcbcf534a0602b0ffb38c9f4b802da5b38745a722ae50" Feb 17 09:12:01 crc kubenswrapper[4813]: I0217 09:12:01.854009 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lrld/must-gather-hw7jb" event={"ID":"6a6ce433-9576-45df-8e22-e935702d27c1","Type":"ContainerStarted","Data":"a94e083d5e381b0418cfa53937ae64608e56a09bfa347ad92b56ed85b4fd6be4"} Feb 17 09:12:01 crc kubenswrapper[4813]: I0217 09:12:01.875831 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6lrld/must-gather-hw7jb" podStartSLOduration=2.6036261339999998 podStartE2EDuration="10.87581136s" podCreationTimestamp="2026-02-17 09:11:51 +0000 UTC" firstStartedPulling="2026-02-17 09:11:52.059575975 +0000 UTC m=+1859.720337198" lastFinishedPulling="2026-02-17 
09:12:00.331761191 +0000 UTC m=+1867.992522424" observedRunningTime="2026-02-17 09:12:01.870765456 +0000 UTC m=+1869.531526699" watchObservedRunningTime="2026-02-17 09:12:01.87581136 +0000 UTC m=+1869.536572583" Feb 17 09:12:01 crc kubenswrapper[4813]: I0217 09:12:01.877885 4813 scope.go:117] "RemoveContainer" containerID="70b8d0872583601cec11540c208de75aec01fd00faacce13922c63d599db98ce" Feb 17 09:12:01 crc kubenswrapper[4813]: I0217 09:12:01.901645 4813 scope.go:117] "RemoveContainer" containerID="bf9070105d0566c671684053c8d7b044c62f7e7fcae0686018656c9385189999" Feb 17 09:12:01 crc kubenswrapper[4813]: I0217 09:12:01.950517 4813 scope.go:117] "RemoveContainer" containerID="a9d0d3e429cb0c338b619256b79cad9bdb6a28e106283ef12137736ea16d47de" Feb 17 09:13:02 crc kubenswrapper[4813]: I0217 09:13:02.086942 4813 scope.go:117] "RemoveContainer" containerID="459ca85c1f415e76aed59b529d21945b6b0af1ade69a6b21937f7fa8b64b3820" Feb 17 09:13:02 crc kubenswrapper[4813]: I0217 09:13:02.135406 4813 scope.go:117] "RemoveContainer" containerID="b08eed0991b86ea479de56fdc681732c3942f2fb1c9ce6c81440dbf4cae6b868" Feb 17 09:13:02 crc kubenswrapper[4813]: I0217 09:13:02.162545 4813 scope.go:117] "RemoveContainer" containerID="48b5263416213597e92c2a126e8567c18d9e49ae9e9f0cbbacd991ec2e795c7c" Feb 17 09:13:02 crc kubenswrapper[4813]: I0217 09:13:02.192264 4813 scope.go:117] "RemoveContainer" containerID="749ff068a2a768a739001284bf93466a5b9a4051fd5a6b597824b36299d815e3" Feb 17 09:13:02 crc kubenswrapper[4813]: I0217 09:13:02.216622 4813 scope.go:117] "RemoveContainer" containerID="8b38453ed88cba3c9d6b0c7cd5f0919fe5c550faa5257d5f2098b1c1f57d1b06" Feb 17 09:13:13 crc kubenswrapper[4813]: I0217 09:13:13.474356 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm_f6226cf6-afa1-48af-8aa9-6f0191f76fa6/util/0.log" Feb 17 09:13:13 crc kubenswrapper[4813]: I0217 09:13:13.631283 4813 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm_f6226cf6-afa1-48af-8aa9-6f0191f76fa6/util/0.log" Feb 17 09:13:13 crc kubenswrapper[4813]: I0217 09:13:13.636169 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm_f6226cf6-afa1-48af-8aa9-6f0191f76fa6/pull/0.log" Feb 17 09:13:13 crc kubenswrapper[4813]: I0217 09:13:13.735576 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm_f6226cf6-afa1-48af-8aa9-6f0191f76fa6/pull/0.log" Feb 17 09:13:13 crc kubenswrapper[4813]: I0217 09:13:13.911359 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm_f6226cf6-afa1-48af-8aa9-6f0191f76fa6/extract/0.log" Feb 17 09:13:13 crc kubenswrapper[4813]: I0217 09:13:13.918097 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm_f6226cf6-afa1-48af-8aa9-6f0191f76fa6/pull/0.log" Feb 17 09:13:13 crc kubenswrapper[4813]: I0217 09:13:13.933247 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_512337696790b38705f36adc46343d5d3d3e9b95a5cdbbc0ee5879f601gx8mm_f6226cf6-afa1-48af-8aa9-6f0191f76fa6/util/0.log" Feb 17 09:13:14 crc kubenswrapper[4813]: I0217 09:13:14.076408 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg_1d59ab30-e7e4-4056-a1ad-2ee71696466c/util/0.log" Feb 17 09:13:14 crc kubenswrapper[4813]: I0217 09:13:14.277140 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg_1d59ab30-e7e4-4056-a1ad-2ee71696466c/util/0.log" Feb 
17 09:13:14 crc kubenswrapper[4813]: I0217 09:13:14.280321 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg_1d59ab30-e7e4-4056-a1ad-2ee71696466c/pull/0.log" Feb 17 09:13:14 crc kubenswrapper[4813]: I0217 09:13:14.331855 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg_1d59ab30-e7e4-4056-a1ad-2ee71696466c/pull/0.log" Feb 17 09:13:14 crc kubenswrapper[4813]: I0217 09:13:14.464698 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg_1d59ab30-e7e4-4056-a1ad-2ee71696466c/pull/0.log" Feb 17 09:13:14 crc kubenswrapper[4813]: I0217 09:13:14.470243 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg_1d59ab30-e7e4-4056-a1ad-2ee71696466c/extract/0.log" Feb 17 09:13:14 crc kubenswrapper[4813]: I0217 09:13:14.493149 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_69fd5f937ff344d5ad8963bc66e11c74507d0ab75326b7adb48a42ae68ks4xg_1d59ab30-e7e4-4056-a1ad-2ee71696466c/util/0.log" Feb 17 09:13:14 crc kubenswrapper[4813]: I0217 09:13:14.922404 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-55cc45767f-r79hm_f8cae50b-944c-4dfd-8cae-5275b9290a07/manager/0.log" Feb 17 09:13:15 crc kubenswrapper[4813]: I0217 09:13:15.157492 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68c6d499cb-zhnzs_0626f4b2-1593-4b46-972d-079f3fe29ce3/manager/0.log" Feb 17 09:13:15 crc kubenswrapper[4813]: I0217 09:13:15.492882 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-9595d6797-c6plm_daecd5b7-6576-4ddf-bb48-2131c26a9995/manager/0.log" Feb 17 09:13:15 crc kubenswrapper[4813]: I0217 09:13:15.992727 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-54fb488b88-vxh2x_7e6fd8d2-9aeb-432a-9c01-e22332432a28/manager/0.log" Feb 17 09:13:16 crc kubenswrapper[4813]: I0217 09:13:16.084518 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-57746b5ff9-mkmzp_b58443c8-72d6-42ba-a920-9c11a9bc6b6e/manager/0.log" Feb 17 09:13:16 crc kubenswrapper[4813]: I0217 09:13:16.422652 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6494cdbf8f-h5hbm_5c6a587a-9a0b-458f-aea4-445dbcfdaecc/manager/0.log" Feb 17 09:13:16 crc kubenswrapper[4813]: I0217 09:13:16.532458 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-66d6b5f488-flpcz_ff0f7626-5da3-4763-8ae6-714ede4a2445/manager/0.log" Feb 17 09:13:16 crc kubenswrapper[4813]: I0217 09:13:16.844909 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-96fff9cb8-cktr5_39e9a182-3baa-4d60-ac63-00d40443be7b/manager/0.log" Feb 17 09:13:16 crc kubenswrapper[4813]: I0217 09:13:16.848566 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-6c78d668d5-k4qmx_4bfccf88-ba10-4e4e-a6f8-d3d7a362990d/manager/0.log" Feb 17 09:13:17 crc kubenswrapper[4813]: I0217 09:13:17.127523 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66997756f6-8ppzs_0a961df2-a4a7-431d-a389-1cafd967a0bc/manager/0.log" Feb 17 09:13:17 crc kubenswrapper[4813]: I0217 09:13:17.295634 4813 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54967dbbdf-7znk6_d656660c-1dd3-4c91-9ef7-12248f1f388a/manager/0.log" Feb 17 09:13:17 crc kubenswrapper[4813]: I0217 09:13:17.370442 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5ddd85db87-wbbfl_28206bb6-553c-4ccd-bb15-8c42c7f34415/manager/0.log" Feb 17 09:13:17 crc kubenswrapper[4813]: I0217 09:13:17.741526 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-c5677dc5d-fv48m_c4ef02fa-778f-4072-b15c-a8e98631c083/manager/0.log" Feb 17 09:13:18 crc kubenswrapper[4813]: I0217 09:13:18.108096 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-59h5m_63faca9e-4dcf-4e1f-a3d7-077b0d8e593f/registry-server/0.log" Feb 17 09:13:18 crc kubenswrapper[4813]: I0217 09:13:18.339829 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-85c99d655-zzj6k_6451fc3e-e020-442e-b5d2-7e1094379337/manager/0.log" Feb 17 09:13:18 crc kubenswrapper[4813]: I0217 09:13:18.666782 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57bd55f9b7-4sccr_8e7a72dd-76e6-47b0-8d51-aad9504620c0/manager/0.log" Feb 17 09:13:18 crc kubenswrapper[4813]: I0217 09:13:18.811230 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bf8b7b945-pqgpx_d9a0e392-aeea-4033-939e-52e42ebf3fa5/manager/0.log" Feb 17 09:13:18 crc kubenswrapper[4813]: I0217 09:13:18.922442 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xlfjs_c2c4389e-21f4-4c70-9bc5-2eb9b93ad2cf/operator/0.log" Feb 17 09:13:19 crc kubenswrapper[4813]: I0217 09:13:19.103196 4813 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-79558bbfbf-w9mll_d633c51f-1eea-4111-a46d-199e2f203c14/manager/0.log" Feb 17 09:13:19 crc kubenswrapper[4813]: I0217 09:13:19.213836 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-745bbbd77b-84kxx_08d18b9c-b137-4735-9e80-95636feac4ed/manager/0.log" Feb 17 09:13:19 crc kubenswrapper[4813]: I0217 09:13:19.742634 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8467ccb4c8-gwmhd_1274f46d-7df1-478f-ad9f-4df095082c3a/manager/0.log" Feb 17 09:13:20 crc kubenswrapper[4813]: I0217 09:13:20.040935 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-56dc67d744-dljh7_cfd95ad0-3c25-4884-a5fa-d91d1f771c1e/manager/0.log" Feb 17 09:13:20 crc kubenswrapper[4813]: I0217 09:13:20.078577 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-c6sg7_bc164bd8-d76c-4e04-b474-464d2e7785aa/registry-server/0.log" Feb 17 09:13:20 crc kubenswrapper[4813]: I0217 09:13:20.521328 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5fc5bdfc99-vm9tf_4fbd84c2-210c-4de7-8379-b9ea8cd4cd3b/manager/0.log" Feb 17 09:13:21 crc kubenswrapper[4813]: I0217 09:13:21.949605 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-c4b7d6946-hz7df_0fdf5f90-ddf7-4c01-ba25-037628a298fb/manager/0.log" Feb 17 09:13:35 crc kubenswrapper[4813]: I0217 09:13:35.165147 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 17 09:13:35 crc kubenswrapper[4813]: I0217 09:13:35.165615 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 09:13:42 crc kubenswrapper[4813]: I0217 09:13:42.415942 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8j65n_e5f29fce-8bc9-48cb-b808-f55cb2e25c31/control-plane-machine-set-operator/0.log" Feb 17 09:13:42 crc kubenswrapper[4813]: I0217 09:13:42.785711 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gjzhl_bf4b8a5c-06c8-4206-852c-3e58e2e35bca/machine-api-operator/0.log" Feb 17 09:13:42 crc kubenswrapper[4813]: I0217 09:13:42.827073 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gjzhl_bf4b8a5c-06c8-4206-852c-3e58e2e35bca/kube-rbac-proxy/0.log" Feb 17 09:13:58 crc kubenswrapper[4813]: I0217 09:13:58.066030 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-xr8mp_af53b136-297b-434b-9ff8-47ba49480ed0/cert-manager-controller/0.log" Feb 17 09:13:58 crc kubenswrapper[4813]: I0217 09:13:58.249501 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-2vthk_b3e18af6-b78a-4769-b2d3-86769c0f5c93/cert-manager-cainjector/0.log" Feb 17 09:13:58 crc kubenswrapper[4813]: I0217 09:13:58.295590 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-xl8fx_7ce963af-f2c1-47af-85f7-658fa10e6394/cert-manager-webhook/0.log" Feb 17 09:14:02 crc kubenswrapper[4813]: I0217 09:14:02.333403 4813 scope.go:117] 
"RemoveContainer" containerID="583f6bf8e0f063e675f7c35e3e22c9a489439c94b813607cdedc4992240c64ad" Feb 17 09:14:02 crc kubenswrapper[4813]: I0217 09:14:02.392510 4813 scope.go:117] "RemoveContainer" containerID="c44a0bbf4ae0ae342537a299c251554551a8ec522aff52f1b829c1a84958682a" Feb 17 09:14:02 crc kubenswrapper[4813]: I0217 09:14:02.414769 4813 scope.go:117] "RemoveContainer" containerID="1ab39c2329e9404fe79387f57afbe8402760a160bb0e2023f0ad0e90823bf689" Feb 17 09:14:02 crc kubenswrapper[4813]: I0217 09:14:02.470628 4813 scope.go:117] "RemoveContainer" containerID="d67e4de8cc5397ef02565469276ba188340656e58982f51825a5ab7de71676a3" Feb 17 09:14:05 crc kubenswrapper[4813]: I0217 09:14:05.165680 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 09:14:05 crc kubenswrapper[4813]: I0217 09:14:05.166027 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 09:14:12 crc kubenswrapper[4813]: I0217 09:14:12.921032 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-j9ltg_162fb627-930f-44ba-891b-34d91fda1558/nmstate-console-plugin/0.log" Feb 17 09:14:13 crc kubenswrapper[4813]: I0217 09:14:13.215343 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-k95j2_c51e43a3-1bfa-4e7d-9f1b-f681e4bb82cd/nmstate-handler/0.log" Feb 17 09:14:13 crc kubenswrapper[4813]: I0217 09:14:13.288649 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-f5sws_68527739-5299-42bc-9b81-16ed9c46f0d0/kube-rbac-proxy/0.log" Feb 17 09:14:13 crc kubenswrapper[4813]: I0217 09:14:13.370163 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-f5sws_68527739-5299-42bc-9b81-16ed9c46f0d0/nmstate-metrics/0.log" Feb 17 09:14:13 crc kubenswrapper[4813]: I0217 09:14:13.566580 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-dr2hx_613d6923-adfe-48e3-a162-941d317ec5fc/nmstate-operator/0.log" Feb 17 09:14:13 crc kubenswrapper[4813]: I0217 09:14:13.589274 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-pg2bp_e8f29774-0876-4d40-aa50-3ba424ae667c/nmstate-webhook/0.log" Feb 17 09:14:24 crc kubenswrapper[4813]: I0217 09:14:24.787152 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l5vqr"] Feb 17 09:14:24 crc kubenswrapper[4813]: I0217 09:14:24.789939 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:24 crc kubenswrapper[4813]: I0217 09:14:24.796706 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l5vqr"] Feb 17 09:14:24 crc kubenswrapper[4813]: I0217 09:14:24.850781 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b05c559-0f07-425b-8128-15b3338ef66d-catalog-content\") pod \"redhat-operators-l5vqr\" (UID: \"2b05c559-0f07-425b-8128-15b3338ef66d\") " pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:24 crc kubenswrapper[4813]: I0217 09:14:24.850864 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkz2m\" (UniqueName: \"kubernetes.io/projected/2b05c559-0f07-425b-8128-15b3338ef66d-kube-api-access-vkz2m\") pod \"redhat-operators-l5vqr\" (UID: \"2b05c559-0f07-425b-8128-15b3338ef66d\") " pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:24 crc kubenswrapper[4813]: I0217 09:14:24.850953 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b05c559-0f07-425b-8128-15b3338ef66d-utilities\") pod \"redhat-operators-l5vqr\" (UID: \"2b05c559-0f07-425b-8128-15b3338ef66d\") " pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:24 crc kubenswrapper[4813]: I0217 09:14:24.952755 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b05c559-0f07-425b-8128-15b3338ef66d-catalog-content\") pod \"redhat-operators-l5vqr\" (UID: \"2b05c559-0f07-425b-8128-15b3338ef66d\") " pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:24 crc kubenswrapper[4813]: I0217 09:14:24.952835 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-vkz2m\" (UniqueName: \"kubernetes.io/projected/2b05c559-0f07-425b-8128-15b3338ef66d-kube-api-access-vkz2m\") pod \"redhat-operators-l5vqr\" (UID: \"2b05c559-0f07-425b-8128-15b3338ef66d\") " pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:24 crc kubenswrapper[4813]: I0217 09:14:24.952859 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b05c559-0f07-425b-8128-15b3338ef66d-utilities\") pod \"redhat-operators-l5vqr\" (UID: \"2b05c559-0f07-425b-8128-15b3338ef66d\") " pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:24 crc kubenswrapper[4813]: I0217 09:14:24.953576 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b05c559-0f07-425b-8128-15b3338ef66d-utilities\") pod \"redhat-operators-l5vqr\" (UID: \"2b05c559-0f07-425b-8128-15b3338ef66d\") " pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:24 crc kubenswrapper[4813]: I0217 09:14:24.953599 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b05c559-0f07-425b-8128-15b3338ef66d-catalog-content\") pod \"redhat-operators-l5vqr\" (UID: \"2b05c559-0f07-425b-8128-15b3338ef66d\") " pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:24 crc kubenswrapper[4813]: I0217 09:14:24.971273 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkz2m\" (UniqueName: \"kubernetes.io/projected/2b05c559-0f07-425b-8128-15b3338ef66d-kube-api-access-vkz2m\") pod \"redhat-operators-l5vqr\" (UID: \"2b05c559-0f07-425b-8128-15b3338ef66d\") " pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:25 crc kubenswrapper[4813]: I0217 09:14:25.125712 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:25 crc kubenswrapper[4813]: I0217 09:14:25.625296 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l5vqr"] Feb 17 09:14:25 crc kubenswrapper[4813]: W0217 09:14:25.635720 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b05c559_0f07_425b_8128_15b3338ef66d.slice/crio-71c4003dbca2175cb858bf0972d41ba11adfb4308ccc3737e7f354cb91162c73 WatchSource:0}: Error finding container 71c4003dbca2175cb858bf0972d41ba11adfb4308ccc3737e7f354cb91162c73: Status 404 returned error can't find the container with id 71c4003dbca2175cb858bf0972d41ba11adfb4308ccc3737e7f354cb91162c73 Feb 17 09:14:26 crc kubenswrapper[4813]: I0217 09:14:26.074708 4813 generic.go:334] "Generic (PLEG): container finished" podID="2b05c559-0f07-425b-8128-15b3338ef66d" containerID="92f05fe866b934b85b55a678e4da9eef4cdeac799cb1679980c925a7dcecbbe1" exitCode=0 Feb 17 09:14:26 crc kubenswrapper[4813]: I0217 09:14:26.074752 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5vqr" event={"ID":"2b05c559-0f07-425b-8128-15b3338ef66d","Type":"ContainerDied","Data":"92f05fe866b934b85b55a678e4da9eef4cdeac799cb1679980c925a7dcecbbe1"} Feb 17 09:14:26 crc kubenswrapper[4813]: I0217 09:14:26.074778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5vqr" event={"ID":"2b05c559-0f07-425b-8128-15b3338ef66d","Type":"ContainerStarted","Data":"71c4003dbca2175cb858bf0972d41ba11adfb4308ccc3737e7f354cb91162c73"} Feb 17 09:14:27 crc kubenswrapper[4813]: I0217 09:14:27.105141 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5vqr" 
event={"ID":"2b05c559-0f07-425b-8128-15b3338ef66d","Type":"ContainerStarted","Data":"1a3c6584246214d07ee137fc89b7c7aed15b20bf77d20d1d7bb880bbf41e6d20"} Feb 17 09:14:28 crc kubenswrapper[4813]: I0217 09:14:28.116775 4813 generic.go:334] "Generic (PLEG): container finished" podID="2b05c559-0f07-425b-8128-15b3338ef66d" containerID="1a3c6584246214d07ee137fc89b7c7aed15b20bf77d20d1d7bb880bbf41e6d20" exitCode=0 Feb 17 09:14:28 crc kubenswrapper[4813]: I0217 09:14:28.116838 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5vqr" event={"ID":"2b05c559-0f07-425b-8128-15b3338ef66d","Type":"ContainerDied","Data":"1a3c6584246214d07ee137fc89b7c7aed15b20bf77d20d1d7bb880bbf41e6d20"} Feb 17 09:14:29 crc kubenswrapper[4813]: I0217 09:14:29.126736 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5vqr" event={"ID":"2b05c559-0f07-425b-8128-15b3338ef66d","Type":"ContainerStarted","Data":"f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614"} Feb 17 09:14:29 crc kubenswrapper[4813]: I0217 09:14:29.151550 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l5vqr" podStartSLOduration=2.696670491 podStartE2EDuration="5.151529342s" podCreationTimestamp="2026-02-17 09:14:24 +0000 UTC" firstStartedPulling="2026-02-17 09:14:26.07619264 +0000 UTC m=+2013.736953863" lastFinishedPulling="2026-02-17 09:14:28.531051491 +0000 UTC m=+2016.191812714" observedRunningTime="2026-02-17 09:14:29.146742846 +0000 UTC m=+2016.807504059" watchObservedRunningTime="2026-02-17 09:14:29.151529342 +0000 UTC m=+2016.812290565" Feb 17 09:14:29 crc kubenswrapper[4813]: I0217 09:14:29.682183 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-jzdcd_8c42bb3e-30f2-484f-98d6-cc3d6209897a/prometheus-operator/0.log" Feb 17 09:14:29 crc kubenswrapper[4813]: I0217 09:14:29.874678 4813 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5667669b-55qdl_3614da2d-8f3c-41bd-a31c-9d7fdef31fad/prometheus-operator-admission-webhook/0.log" Feb 17 09:14:29 crc kubenswrapper[4813]: I0217 09:14:29.927399 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5667669b-gvchv_898154a7-5f53-4b78-bd75-4c62b2e6cae1/prometheus-operator-admission-webhook/0.log" Feb 17 09:14:30 crc kubenswrapper[4813]: I0217 09:14:30.177335 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-gh8k5_123ad867-ebe9-4ea6-acfe-82f25010549e/observability-ui-dashboards/0.log" Feb 17 09:14:30 crc kubenswrapper[4813]: I0217 09:14:30.210379 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-629tg_05000f8d-0078-48a4-a118-e184c008b5d4/operator/0.log" Feb 17 09:14:30 crc kubenswrapper[4813]: I0217 09:14:30.361369 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5xprj_c76875de-7ea3-431b-882a-e12415659320/perses-operator/0.log" Feb 17 09:14:35 crc kubenswrapper[4813]: I0217 09:14:35.126752 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:35 crc kubenswrapper[4813]: I0217 09:14:35.127416 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:35 crc kubenswrapper[4813]: I0217 09:14:35.165688 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 09:14:35 crc 
kubenswrapper[4813]: I0217 09:14:35.165942 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 09:14:35 crc kubenswrapper[4813]: I0217 09:14:35.165997 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 09:14:35 crc kubenswrapper[4813]: I0217 09:14:35.166787 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"547bf834189b6e0ded62303fe339620a5821abad97ab1b7e1056ad7ed88e802b"} pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 09:14:35 crc kubenswrapper[4813]: I0217 09:14:35.166852 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" containerID="cri-o://547bf834189b6e0ded62303fe339620a5821abad97ab1b7e1056ad7ed88e802b" gracePeriod=600 Feb 17 09:14:35 crc kubenswrapper[4813]: I0217 09:14:35.186437 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:35 crc kubenswrapper[4813]: I0217 09:14:35.243212 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:36 crc kubenswrapper[4813]: I0217 09:14:36.187705 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a6ba827-b08b-4163-b067-d9adb119398d" 
containerID="547bf834189b6e0ded62303fe339620a5821abad97ab1b7e1056ad7ed88e802b" exitCode=0 Feb 17 09:14:36 crc kubenswrapper[4813]: I0217 09:14:36.188486 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerDied","Data":"547bf834189b6e0ded62303fe339620a5821abad97ab1b7e1056ad7ed88e802b"} Feb 17 09:14:36 crc kubenswrapper[4813]: I0217 09:14:36.188571 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerStarted","Data":"1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"} Feb 17 09:14:36 crc kubenswrapper[4813]: I0217 09:14:36.188600 4813 scope.go:117] "RemoveContainer" containerID="4d30fdf6d317502142c8e3729390e17ab48972f47b9d726d8a614dfa36c28f58" Feb 17 09:14:38 crc kubenswrapper[4813]: I0217 09:14:38.036247 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w54dd"] Feb 17 09:14:38 crc kubenswrapper[4813]: I0217 09:14:38.041328 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w54dd"] Feb 17 09:14:39 crc kubenswrapper[4813]: I0217 09:14:39.124438 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cac6042-b341-4c35-8049-fe8b24d07179" path="/var/lib/kubelet/pods/1cac6042-b341-4c35-8049-fe8b24d07179/volumes" Feb 17 09:14:44 crc kubenswrapper[4813]: I0217 09:14:44.371284 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l5vqr"] Feb 17 09:14:44 crc kubenswrapper[4813]: I0217 09:14:44.372139 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l5vqr" podUID="2b05c559-0f07-425b-8128-15b3338ef66d" containerName="registry-server" 
containerID="cri-o://f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614" gracePeriod=2 Feb 17 09:14:44 crc kubenswrapper[4813]: I0217 09:14:44.780007 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:44 crc kubenswrapper[4813]: I0217 09:14:44.908181 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b05c559-0f07-425b-8128-15b3338ef66d-utilities\") pod \"2b05c559-0f07-425b-8128-15b3338ef66d\" (UID: \"2b05c559-0f07-425b-8128-15b3338ef66d\") " Feb 17 09:14:44 crc kubenswrapper[4813]: I0217 09:14:44.908287 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b05c559-0f07-425b-8128-15b3338ef66d-catalog-content\") pod \"2b05c559-0f07-425b-8128-15b3338ef66d\" (UID: \"2b05c559-0f07-425b-8128-15b3338ef66d\") " Feb 17 09:14:44 crc kubenswrapper[4813]: I0217 09:14:44.908371 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkz2m\" (UniqueName: \"kubernetes.io/projected/2b05c559-0f07-425b-8128-15b3338ef66d-kube-api-access-vkz2m\") pod \"2b05c559-0f07-425b-8128-15b3338ef66d\" (UID: \"2b05c559-0f07-425b-8128-15b3338ef66d\") " Feb 17 09:14:44 crc kubenswrapper[4813]: I0217 09:14:44.909263 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b05c559-0f07-425b-8128-15b3338ef66d-utilities" (OuterVolumeSpecName: "utilities") pod "2b05c559-0f07-425b-8128-15b3338ef66d" (UID: "2b05c559-0f07-425b-8128-15b3338ef66d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:14:44 crc kubenswrapper[4813]: I0217 09:14:44.914549 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b05c559-0f07-425b-8128-15b3338ef66d-kube-api-access-vkz2m" (OuterVolumeSpecName: "kube-api-access-vkz2m") pod "2b05c559-0f07-425b-8128-15b3338ef66d" (UID: "2b05c559-0f07-425b-8128-15b3338ef66d"). InnerVolumeSpecName "kube-api-access-vkz2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.009786 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b05c559-0f07-425b-8128-15b3338ef66d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.009820 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkz2m\" (UniqueName: \"kubernetes.io/projected/2b05c559-0f07-425b-8128-15b3338ef66d-kube-api-access-vkz2m\") on node \"crc\" DevicePath \"\"" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.030293 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b05c559-0f07-425b-8128-15b3338ef66d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b05c559-0f07-425b-8128-15b3338ef66d" (UID: "2b05c559-0f07-425b-8128-15b3338ef66d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.111724 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b05c559-0f07-425b-8128-15b3338ef66d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.273799 4813 generic.go:334] "Generic (PLEG): container finished" podID="2b05c559-0f07-425b-8128-15b3338ef66d" containerID="f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614" exitCode=0 Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.273855 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5vqr" event={"ID":"2b05c559-0f07-425b-8128-15b3338ef66d","Type":"ContainerDied","Data":"f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614"} Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.273897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5vqr" event={"ID":"2b05c559-0f07-425b-8128-15b3338ef66d","Type":"ContainerDied","Data":"71c4003dbca2175cb858bf0972d41ba11adfb4308ccc3737e7f354cb91162c73"} Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.273919 4813 scope.go:117] "RemoveContainer" containerID="f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.273922 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l5vqr" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.316736 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l5vqr"] Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.318957 4813 scope.go:117] "RemoveContainer" containerID="1a3c6584246214d07ee137fc89b7c7aed15b20bf77d20d1d7bb880bbf41e6d20" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.326252 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l5vqr"] Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.358508 4813 scope.go:117] "RemoveContainer" containerID="92f05fe866b934b85b55a678e4da9eef4cdeac799cb1679980c925a7dcecbbe1" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.386021 4813 scope.go:117] "RemoveContainer" containerID="f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614" Feb 17 09:14:45 crc kubenswrapper[4813]: E0217 09:14:45.386459 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614\": container with ID starting with f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614 not found: ID does not exist" containerID="f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.386487 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614"} err="failed to get container status \"f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614\": rpc error: code = NotFound desc = could not find container \"f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614\": container with ID starting with f4cecd20b3d0039dfd6762944e463aa13839dc2abcd660eca1ea779e704e8614 not found: ID does 
not exist" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.386508 4813 scope.go:117] "RemoveContainer" containerID="1a3c6584246214d07ee137fc89b7c7aed15b20bf77d20d1d7bb880bbf41e6d20" Feb 17 09:14:45 crc kubenswrapper[4813]: E0217 09:14:45.386718 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3c6584246214d07ee137fc89b7c7aed15b20bf77d20d1d7bb880bbf41e6d20\": container with ID starting with 1a3c6584246214d07ee137fc89b7c7aed15b20bf77d20d1d7bb880bbf41e6d20 not found: ID does not exist" containerID="1a3c6584246214d07ee137fc89b7c7aed15b20bf77d20d1d7bb880bbf41e6d20" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.386743 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3c6584246214d07ee137fc89b7c7aed15b20bf77d20d1d7bb880bbf41e6d20"} err="failed to get container status \"1a3c6584246214d07ee137fc89b7c7aed15b20bf77d20d1d7bb880bbf41e6d20\": rpc error: code = NotFound desc = could not find container \"1a3c6584246214d07ee137fc89b7c7aed15b20bf77d20d1d7bb880bbf41e6d20\": container with ID starting with 1a3c6584246214d07ee137fc89b7c7aed15b20bf77d20d1d7bb880bbf41e6d20 not found: ID does not exist" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.386756 4813 scope.go:117] "RemoveContainer" containerID="92f05fe866b934b85b55a678e4da9eef4cdeac799cb1679980c925a7dcecbbe1" Feb 17 09:14:45 crc kubenswrapper[4813]: E0217 09:14:45.386959 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92f05fe866b934b85b55a678e4da9eef4cdeac799cb1679980c925a7dcecbbe1\": container with ID starting with 92f05fe866b934b85b55a678e4da9eef4cdeac799cb1679980c925a7dcecbbe1 not found: ID does not exist" containerID="92f05fe866b934b85b55a678e4da9eef4cdeac799cb1679980c925a7dcecbbe1" Feb 17 09:14:45 crc kubenswrapper[4813]: I0217 09:14:45.386979 4813 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92f05fe866b934b85b55a678e4da9eef4cdeac799cb1679980c925a7dcecbbe1"} err="failed to get container status \"92f05fe866b934b85b55a678e4da9eef4cdeac799cb1679980c925a7dcecbbe1\": rpc error: code = NotFound desc = could not find container \"92f05fe866b934b85b55a678e4da9eef4cdeac799cb1679980c925a7dcecbbe1\": container with ID starting with 92f05fe866b934b85b55a678e4da9eef4cdeac799cb1679980c925a7dcecbbe1 not found: ID does not exist" Feb 17 09:14:46 crc kubenswrapper[4813]: I0217 09:14:46.596118 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-rsh7g_947d8da1-a53f-43bf-b2b4-742ec2777803/kube-rbac-proxy/0.log" Feb 17 09:14:46 crc kubenswrapper[4813]: I0217 09:14:46.691011 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-rsh7g_947d8da1-a53f-43bf-b2b4-742ec2777803/controller/0.log" Feb 17 09:14:46 crc kubenswrapper[4813]: I0217 09:14:46.920064 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/cp-frr-files/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.068552 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/cp-reloader/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.092416 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/cp-frr-files/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.106381 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/cp-reloader/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.111925 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/cp-metrics/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.119778 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b05c559-0f07-425b-8128-15b3338ef66d" path="/var/lib/kubelet/pods/2b05c559-0f07-425b-8128-15b3338ef66d/volumes" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.349658 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/cp-reloader/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.352496 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/cp-metrics/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.410001 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/cp-metrics/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.422866 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/cp-frr-files/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.599336 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/cp-frr-files/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.613696 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/cp-metrics/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.634358 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/cp-reloader/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.657411 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/controller/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.799823 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/frr-metrics/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.801042 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/kube-rbac-proxy/0.log" Feb 17 09:14:47 crc kubenswrapper[4813]: I0217 09:14:47.898495 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/kube-rbac-proxy-frr/0.log" Feb 17 09:14:48 crc kubenswrapper[4813]: I0217 09:14:48.051659 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/reloader/0.log" Feb 17 09:14:48 crc kubenswrapper[4813]: I0217 09:14:48.103485 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-kn99d_c190f15b-e4c6-4fef-9857-654242e7512f/frr-k8s-webhook-server/0.log" Feb 17 09:14:48 crc kubenswrapper[4813]: I0217 09:14:48.300353 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5ff56d7b8b-xq8db_e852743f-2bfd-4b73-a5f9-2c56d356b99a/manager/0.log" Feb 17 09:14:48 crc kubenswrapper[4813]: I0217 09:14:48.662848 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vnsgt_d252311d-9fc5-4ecf-83de-51b9d7d371bb/kube-rbac-proxy/0.log" Feb 17 09:14:48 crc kubenswrapper[4813]: I0217 09:14:48.662986 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-85f69475bd-296r2_a20c117f-10e4-46ba-81cc-38c75428e6fc/webhook-server/0.log" Feb 17 09:14:48 crc kubenswrapper[4813]: I0217 09:14:48.878410 4813 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pkkfc_6206ee5c-0d0e-470b-9f36-0e4a6e36d5fb/frr/0.log" Feb 17 09:14:49 crc kubenswrapper[4813]: I0217 09:14:49.291883 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-vnsgt_d252311d-9fc5-4ecf-83de-51b9d7d371bb/speaker/0.log" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.161126 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw"] Feb 17 09:15:00 crc kubenswrapper[4813]: E0217 09:15:00.161906 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b05c559-0f07-425b-8128-15b3338ef66d" containerName="extract-content" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.161920 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b05c559-0f07-425b-8128-15b3338ef66d" containerName="extract-content" Feb 17 09:15:00 crc kubenswrapper[4813]: E0217 09:15:00.161940 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b05c559-0f07-425b-8128-15b3338ef66d" containerName="extract-utilities" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.161948 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b05c559-0f07-425b-8128-15b3338ef66d" containerName="extract-utilities" Feb 17 09:15:00 crc kubenswrapper[4813]: E0217 09:15:00.161960 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b05c559-0f07-425b-8128-15b3338ef66d" containerName="registry-server" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.161969 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b05c559-0f07-425b-8128-15b3338ef66d" containerName="registry-server" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.162168 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b05c559-0f07-425b-8128-15b3338ef66d" containerName="registry-server" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.162769 
4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.164669 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.165519 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.172278 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw"] Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.251181 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrd4z\" (UniqueName: \"kubernetes.io/projected/0a6de549-5c82-4bd1-a207-e1cbc8724493-kube-api-access-hrd4z\") pod \"collect-profiles-29521995-db5hw\" (UID: \"0a6de549-5c82-4bd1-a207-e1cbc8724493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.251430 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6de549-5c82-4bd1-a207-e1cbc8724493-config-volume\") pod \"collect-profiles-29521995-db5hw\" (UID: \"0a6de549-5c82-4bd1-a207-e1cbc8724493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.251489 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6de549-5c82-4bd1-a207-e1cbc8724493-secret-volume\") pod \"collect-profiles-29521995-db5hw\" (UID: \"0a6de549-5c82-4bd1-a207-e1cbc8724493\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.353102 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6de549-5c82-4bd1-a207-e1cbc8724493-config-volume\") pod \"collect-profiles-29521995-db5hw\" (UID: \"0a6de549-5c82-4bd1-a207-e1cbc8724493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.353193 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6de549-5c82-4bd1-a207-e1cbc8724493-secret-volume\") pod \"collect-profiles-29521995-db5hw\" (UID: \"0a6de549-5c82-4bd1-a207-e1cbc8724493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.353374 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrd4z\" (UniqueName: \"kubernetes.io/projected/0a6de549-5c82-4bd1-a207-e1cbc8724493-kube-api-access-hrd4z\") pod \"collect-profiles-29521995-db5hw\" (UID: \"0a6de549-5c82-4bd1-a207-e1cbc8724493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.354850 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6de549-5c82-4bd1-a207-e1cbc8724493-config-volume\") pod \"collect-profiles-29521995-db5hw\" (UID: \"0a6de549-5c82-4bd1-a207-e1cbc8724493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.361261 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0a6de549-5c82-4bd1-a207-e1cbc8724493-secret-volume\") pod \"collect-profiles-29521995-db5hw\" (UID: \"0a6de549-5c82-4bd1-a207-e1cbc8724493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.372170 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrd4z\" (UniqueName: \"kubernetes.io/projected/0a6de549-5c82-4bd1-a207-e1cbc8724493-kube-api-access-hrd4z\") pod \"collect-profiles-29521995-db5hw\" (UID: \"0a6de549-5c82-4bd1-a207-e1cbc8724493\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.493067 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:00 crc kubenswrapper[4813]: I0217 09:15:00.733537 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw"] Feb 17 09:15:01 crc kubenswrapper[4813]: I0217 09:15:01.403390 4813 generic.go:334] "Generic (PLEG): container finished" podID="0a6de549-5c82-4bd1-a207-e1cbc8724493" containerID="cae987c1cd63433eebaabeea71b05ba21cb74430589530a5b4772e710573824a" exitCode=0 Feb 17 09:15:01 crc kubenswrapper[4813]: I0217 09:15:01.403481 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" event={"ID":"0a6de549-5c82-4bd1-a207-e1cbc8724493","Type":"ContainerDied","Data":"cae987c1cd63433eebaabeea71b05ba21cb74430589530a5b4772e710573824a"} Feb 17 09:15:01 crc kubenswrapper[4813]: I0217 09:15:01.403798 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" 
event={"ID":"0a6de549-5c82-4bd1-a207-e1cbc8724493","Type":"ContainerStarted","Data":"4d0d21d742878363fc116204f8660bc0ade0c1e404739ce1573f06a576821266"} Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.592671 4813 scope.go:117] "RemoveContainer" containerID="a9a7ba3924ec3adf494ceee70cdbc828731839a691b733f2f60fad9aeae69812" Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.644111 4813 scope.go:117] "RemoveContainer" containerID="eb660b5deca9b71a2ae1508e94f6722b9c33e3f59b281a72515c268ed27439d2" Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.681849 4813 scope.go:117] "RemoveContainer" containerID="9a2323754e71e2b69d3eafbcc8cd70fc1a2050ac1fdc90edb02c37dfac8a4bdb" Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.773712 4813 scope.go:117] "RemoveContainer" containerID="a1f6c5382ccd3d2571ae05b670fc8f3c994ef621ed68697d569f318f514d0730" Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.782691 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.790560 4813 scope.go:117] "RemoveContainer" containerID="75bc40bdc7b1f376cbecb207bcf65260e9d02d628307ff22d3d370b07a1a8f23" Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.836528 4813 scope.go:117] "RemoveContainer" containerID="72cf464838c665ad283dea4190e29e4562f2589d2de02902a0867d197f308370" Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.872236 4813 scope.go:117] "RemoveContainer" containerID="f202b7b0291084e89d19f7f9d156ed7f94ba23677a17129108e8b7a45b647787" Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.897971 4813 scope.go:117] "RemoveContainer" containerID="07507a2aa1c71de5bf4db5efb4923151e226d4b882083028e01254b60dcdcef3" Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.906932 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrd4z\" (UniqueName: 
\"kubernetes.io/projected/0a6de549-5c82-4bd1-a207-e1cbc8724493-kube-api-access-hrd4z\") pod \"0a6de549-5c82-4bd1-a207-e1cbc8724493\" (UID: \"0a6de549-5c82-4bd1-a207-e1cbc8724493\") " Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.907060 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6de549-5c82-4bd1-a207-e1cbc8724493-secret-volume\") pod \"0a6de549-5c82-4bd1-a207-e1cbc8724493\" (UID: \"0a6de549-5c82-4bd1-a207-e1cbc8724493\") " Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.907097 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6de549-5c82-4bd1-a207-e1cbc8724493-config-volume\") pod \"0a6de549-5c82-4bd1-a207-e1cbc8724493\" (UID: \"0a6de549-5c82-4bd1-a207-e1cbc8724493\") " Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.907607 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a6de549-5c82-4bd1-a207-e1cbc8724493-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a6de549-5c82-4bd1-a207-e1cbc8724493" (UID: "0a6de549-5c82-4bd1-a207-e1cbc8724493"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.912182 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a6de549-5c82-4bd1-a207-e1cbc8724493-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a6de549-5c82-4bd1-a207-e1cbc8724493" (UID: "0a6de549-5c82-4bd1-a207-e1cbc8724493"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.912375 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a6de549-5c82-4bd1-a207-e1cbc8724493-kube-api-access-hrd4z" (OuterVolumeSpecName: "kube-api-access-hrd4z") pod "0a6de549-5c82-4bd1-a207-e1cbc8724493" (UID: "0a6de549-5c82-4bd1-a207-e1cbc8724493"). InnerVolumeSpecName "kube-api-access-hrd4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:15:02 crc kubenswrapper[4813]: I0217 09:15:02.915494 4813 scope.go:117] "RemoveContainer" containerID="4b99b785307454d1cd80c6a84f290f81c5e983866c0640308bccbe3c079c31f6" Feb 17 09:15:03 crc kubenswrapper[4813]: I0217 09:15:03.008647 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6de549-5c82-4bd1-a207-e1cbc8724493-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 09:15:03 crc kubenswrapper[4813]: I0217 09:15:03.008672 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6de549-5c82-4bd1-a207-e1cbc8724493-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 09:15:03 crc kubenswrapper[4813]: I0217 09:15:03.008681 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrd4z\" (UniqueName: \"kubernetes.io/projected/0a6de549-5c82-4bd1-a207-e1cbc8724493-kube-api-access-hrd4z\") on node \"crc\" DevicePath \"\"" Feb 17 09:15:03 crc kubenswrapper[4813]: I0217 09:15:03.419850 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" event={"ID":"0a6de549-5c82-4bd1-a207-e1cbc8724493","Type":"ContainerDied","Data":"4d0d21d742878363fc116204f8660bc0ade0c1e404739ce1573f06a576821266"} Feb 17 09:15:03 crc kubenswrapper[4813]: I0217 09:15:03.419901 4813 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4d0d21d742878363fc116204f8660bc0ade0c1e404739ce1573f06a576821266" Feb 17 09:15:03 crc kubenswrapper[4813]: I0217 09:15:03.419907 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521995-db5hw" Feb 17 09:15:03 crc kubenswrapper[4813]: I0217 09:15:03.900109 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc"] Feb 17 09:15:03 crc kubenswrapper[4813]: I0217 09:15:03.906542 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521950-jtvxc"] Feb 17 09:15:05 crc kubenswrapper[4813]: I0217 09:15:05.142279 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a696d4-3301-4fd5-9d70-efa790fbce35" path="/var/lib/kubelet/pods/b0a696d4-3301-4fd5-9d70-efa790fbce35/volumes" Feb 17 09:15:14 crc kubenswrapper[4813]: I0217 09:15:14.237069 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_1fbc11e5-9828-4421-bd30-bbf16df6be8f/init-config-reloader/0.log" Feb 17 09:15:14 crc kubenswrapper[4813]: I0217 09:15:14.480004 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_1fbc11e5-9828-4421-bd30-bbf16df6be8f/alertmanager/0.log" Feb 17 09:15:14 crc kubenswrapper[4813]: I0217 09:15:14.485655 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_1fbc11e5-9828-4421-bd30-bbf16df6be8f/init-config-reloader/0.log" Feb 17 09:15:14 crc kubenswrapper[4813]: I0217 09:15:14.580020 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_1fbc11e5-9828-4421-bd30-bbf16df6be8f/config-reloader/0.log" Feb 17 09:15:14 crc kubenswrapper[4813]: I0217 09:15:14.654988 4813 log.go:25] "Finished parsing 
log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6e499d50-6243-48a3-a11c-e264b891a21d/ceilometer-central-agent/0.log" Feb 17 09:15:14 crc kubenswrapper[4813]: I0217 09:15:14.763762 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6e499d50-6243-48a3-a11c-e264b891a21d/ceilometer-notification-agent/0.log" Feb 17 09:15:14 crc kubenswrapper[4813]: I0217 09:15:14.775614 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6e499d50-6243-48a3-a11c-e264b891a21d/proxy-httpd/0.log" Feb 17 09:15:14 crc kubenswrapper[4813]: I0217 09:15:14.929960 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6e499d50-6243-48a3-a11c-e264b891a21d/sg-core/0.log" Feb 17 09:15:15 crc kubenswrapper[4813]: I0217 09:15:15.143209 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-cron-29521981-fxkjh_88bf85eb-632d-49ec-a4be-f2724e69ae9a/keystone-cron/0.log" Feb 17 09:15:15 crc kubenswrapper[4813]: I0217 09:15:15.374152 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-ffc69f97c-tr2zk_f6f5f623-f902-4898-8ad7-15ae7d031197/keystone-api/0.log" Feb 17 09:15:15 crc kubenswrapper[4813]: I0217 09:15:15.584217 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_kube-state-metrics-0_41fdf3a6-6845-4466-b703-5acec8528f28/kube-state-metrics/0.log" Feb 17 09:15:15 crc kubenswrapper[4813]: I0217 09:15:15.795210 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_e3a68bb5-cac8-4f13-892a-272304837676/mysql-bootstrap/0.log" Feb 17 09:15:16 crc kubenswrapper[4813]: I0217 09:15:16.037561 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_e3a68bb5-cac8-4f13-892a-272304837676/galera/0.log" Feb 17 09:15:16 crc kubenswrapper[4813]: I0217 
09:15:16.104193 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_e3a68bb5-cac8-4f13-892a-272304837676/mysql-bootstrap/0.log" Feb 17 09:15:16 crc kubenswrapper[4813]: I0217 09:15:16.269072 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstackclient_7e915b03-8f70-4532-87be-7bdc51d20ae5/openstackclient/0.log" Feb 17 09:15:16 crc kubenswrapper[4813]: I0217 09:15:16.376704 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_1335e70a-64d4-40c2-9c87-ec4dfdbddfcf/init-config-reloader/0.log" Feb 17 09:15:16 crc kubenswrapper[4813]: I0217 09:15:16.573421 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_1335e70a-64d4-40c2-9c87-ec4dfdbddfcf/config-reloader/0.log" Feb 17 09:15:16 crc kubenswrapper[4813]: I0217 09:15:16.648057 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_1335e70a-64d4-40c2-9c87-ec4dfdbddfcf/prometheus/0.log" Feb 17 09:15:16 crc kubenswrapper[4813]: I0217 09:15:16.658960 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_1335e70a-64d4-40c2-9c87-ec4dfdbddfcf/init-config-reloader/0.log" Feb 17 09:15:16 crc kubenswrapper[4813]: I0217 09:15:16.892421 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_1335e70a-64d4-40c2-9c87-ec4dfdbddfcf/thanos-sidecar/0.log" Feb 17 09:15:17 crc kubenswrapper[4813]: I0217 09:15:17.039382 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_a9a8ae31-c620-49c7-9752-6b045300e16a/setup-container/0.log" Feb 17 09:15:17 crc kubenswrapper[4813]: I0217 09:15:17.135176 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_a9a8ae31-c620-49c7-9752-6b045300e16a/setup-container/0.log" Feb 17 09:15:17 crc kubenswrapper[4813]: I0217 09:15:17.179278 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_a9a8ae31-c620-49c7-9752-6b045300e16a/rabbitmq/0.log" Feb 17 09:15:17 crc kubenswrapper[4813]: I0217 09:15:17.411423 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_67e1e624-5191-497b-9c9a-ff9d281c0871/setup-container/0.log" Feb 17 09:15:17 crc kubenswrapper[4813]: I0217 09:15:17.690609 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_67e1e624-5191-497b-9c9a-ff9d281c0871/setup-container/0.log" Feb 17 09:15:17 crc kubenswrapper[4813]: I0217 09:15:17.807268 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_67e1e624-5191-497b-9c9a-ff9d281c0871/rabbitmq/0.log" Feb 17 09:15:26 crc kubenswrapper[4813]: I0217 09:15:26.241455 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_memcached-0_f47de837-e205-46ce-8397-77b54ea93653/memcached/0.log" Feb 17 09:15:35 crc kubenswrapper[4813]: I0217 09:15:35.562724 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx_63d43bda-eb95-4074-8415-8e7196bd950e/util/0.log" Feb 17 09:15:35 crc kubenswrapper[4813]: I0217 09:15:35.767872 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx_63d43bda-eb95-4074-8415-8e7196bd950e/util/0.log" Feb 17 09:15:35 crc kubenswrapper[4813]: I0217 09:15:35.815167 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx_63d43bda-eb95-4074-8415-8e7196bd950e/pull/0.log" Feb 17 09:15:35 crc kubenswrapper[4813]: I0217 09:15:35.834589 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx_63d43bda-eb95-4074-8415-8e7196bd950e/pull/0.log" Feb 17 09:15:35 crc kubenswrapper[4813]: I0217 09:15:35.963192 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx_63d43bda-eb95-4074-8415-8e7196bd950e/util/0.log" Feb 17 09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.006364 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx_63d43bda-eb95-4074-8415-8e7196bd950e/extract/0.log" Feb 17 09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.049722 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m9nkx_63d43bda-eb95-4074-8415-8e7196bd950e/pull/0.log" Feb 17 09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.152911 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs_b3262ba6-e759-4186-8461-29bda3c97987/util/0.log" Feb 17 09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.304450 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs_b3262ba6-e759-4186-8461-29bda3c97987/util/0.log" Feb 17 09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.335808 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs_b3262ba6-e759-4186-8461-29bda3c97987/pull/0.log" Feb 17 
09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.401894 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs_b3262ba6-e759-4186-8461-29bda3c97987/pull/0.log" Feb 17 09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.564036 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs_b3262ba6-e759-4186-8461-29bda3c97987/extract/0.log" Feb 17 09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.578477 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs_b3262ba6-e759-4186-8461-29bda3c97987/util/0.log" Feb 17 09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.579154 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08zpwzs_b3262ba6-e759-4186-8461-29bda3c97987/pull/0.log" Feb 17 09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.742337 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr_45316cad-79c7-4ecb-801e-25dbf1c5d213/util/0.log" Feb 17 09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.954428 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr_45316cad-79c7-4ecb-801e-25dbf1c5d213/pull/0.log" Feb 17 09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.958663 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr_45316cad-79c7-4ecb-801e-25dbf1c5d213/util/0.log" Feb 17 09:15:36 crc kubenswrapper[4813]: I0217 09:15:36.974957 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr_45316cad-79c7-4ecb-801e-25dbf1c5d213/pull/0.log" Feb 17 09:15:37 crc kubenswrapper[4813]: I0217 09:15:37.136640 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr_45316cad-79c7-4ecb-801e-25dbf1c5d213/pull/0.log" Feb 17 09:15:37 crc kubenswrapper[4813]: I0217 09:15:37.148048 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr_45316cad-79c7-4ecb-801e-25dbf1c5d213/util/0.log" Feb 17 09:15:37 crc kubenswrapper[4813]: I0217 09:15:37.191087 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213jsnmr_45316cad-79c7-4ecb-801e-25dbf1c5d213/extract/0.log" Feb 17 09:15:37 crc kubenswrapper[4813]: I0217 09:15:37.335859 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfq75_ed4df304-511d-49cc-a151-68139db654e0/extract-utilities/0.log" Feb 17 09:15:37 crc kubenswrapper[4813]: I0217 09:15:37.518996 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfq75_ed4df304-511d-49cc-a151-68139db654e0/extract-content/0.log" Feb 17 09:15:37 crc kubenswrapper[4813]: I0217 09:15:37.571506 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfq75_ed4df304-511d-49cc-a151-68139db654e0/extract-content/0.log" Feb 17 09:15:37 crc kubenswrapper[4813]: I0217 09:15:37.586511 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfq75_ed4df304-511d-49cc-a151-68139db654e0/extract-utilities/0.log" Feb 17 09:15:37 crc kubenswrapper[4813]: I0217 09:15:37.689483 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-lfq75_ed4df304-511d-49cc-a151-68139db654e0/extract-utilities/0.log" Feb 17 09:15:37 crc kubenswrapper[4813]: I0217 09:15:37.727598 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfq75_ed4df304-511d-49cc-a151-68139db654e0/extract-content/0.log" Feb 17 09:15:37 crc kubenswrapper[4813]: I0217 09:15:37.923718 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hklb7_e81422ea-63bb-4de1-b63f-7f3c9b26b5c7/extract-utilities/0.log" Feb 17 09:15:38 crc kubenswrapper[4813]: I0217 09:15:38.046735 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lfq75_ed4df304-511d-49cc-a151-68139db654e0/registry-server/0.log" Feb 17 09:15:38 crc kubenswrapper[4813]: I0217 09:15:38.349838 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hklb7_e81422ea-63bb-4de1-b63f-7f3c9b26b5c7/extract-utilities/0.log" Feb 17 09:15:38 crc kubenswrapper[4813]: I0217 09:15:38.363557 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hklb7_e81422ea-63bb-4de1-b63f-7f3c9b26b5c7/extract-content/0.log" Feb 17 09:15:38 crc kubenswrapper[4813]: I0217 09:15:38.393557 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hklb7_e81422ea-63bb-4de1-b63f-7f3c9b26b5c7/extract-content/0.log" Feb 17 09:15:38 crc kubenswrapper[4813]: I0217 09:15:38.580385 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hklb7_e81422ea-63bb-4de1-b63f-7f3c9b26b5c7/extract-utilities/0.log" Feb 17 09:15:38 crc kubenswrapper[4813]: I0217 09:15:38.605492 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-hklb7_e81422ea-63bb-4de1-b63f-7f3c9b26b5c7/extract-content/0.log" Feb 17 09:15:38 crc kubenswrapper[4813]: I0217 09:15:38.827591 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6_c87b37ec-5473-4039-8dfa-d695c5f95a3e/util/0.log" Feb 17 09:15:38 crc kubenswrapper[4813]: I0217 09:15:38.860186 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hklb7_e81422ea-63bb-4de1-b63f-7f3c9b26b5c7/registry-server/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.033691 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6_c87b37ec-5473-4039-8dfa-d695c5f95a3e/pull/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.043175 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6_c87b37ec-5473-4039-8dfa-d695c5f95a3e/util/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.056527 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6_c87b37ec-5473-4039-8dfa-d695c5f95a3e/pull/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.216139 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6_c87b37ec-5473-4039-8dfa-d695c5f95a3e/util/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.239426 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6_c87b37ec-5473-4039-8dfa-d695c5f95a3e/pull/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.291642 
4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca22tk6_c87b37ec-5473-4039-8dfa-d695c5f95a3e/extract/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.436569 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8lt56_ede3523a-8348-4d9f-871d-ba4c36857ac4/marketplace-operator/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.510196 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lzjh7_787772d1-3a83-410d-825e-c63219fb80ec/extract-utilities/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.602617 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lzjh7_787772d1-3a83-410d-825e-c63219fb80ec/extract-utilities/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.606790 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lzjh7_787772d1-3a83-410d-825e-c63219fb80ec/extract-content/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.657766 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lzjh7_787772d1-3a83-410d-825e-c63219fb80ec/extract-content/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.800035 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lzjh7_787772d1-3a83-410d-825e-c63219fb80ec/extract-utilities/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.846996 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lzjh7_787772d1-3a83-410d-825e-c63219fb80ec/extract-content/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.900518 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-q2wwb_7a3ec35f-1008-41ec-842d-7d381d01ef12/extract-utilities/0.log" Feb 17 09:15:39 crc kubenswrapper[4813]: I0217 09:15:39.917195 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lzjh7_787772d1-3a83-410d-825e-c63219fb80ec/registry-server/0.log" Feb 17 09:15:40 crc kubenswrapper[4813]: I0217 09:15:40.072510 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q2wwb_7a3ec35f-1008-41ec-842d-7d381d01ef12/extract-content/0.log" Feb 17 09:15:40 crc kubenswrapper[4813]: I0217 09:15:40.078895 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q2wwb_7a3ec35f-1008-41ec-842d-7d381d01ef12/extract-content/0.log" Feb 17 09:15:40 crc kubenswrapper[4813]: I0217 09:15:40.086433 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q2wwb_7a3ec35f-1008-41ec-842d-7d381d01ef12/extract-utilities/0.log" Feb 17 09:15:40 crc kubenswrapper[4813]: I0217 09:15:40.300357 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q2wwb_7a3ec35f-1008-41ec-842d-7d381d01ef12/extract-utilities/0.log" Feb 17 09:15:40 crc kubenswrapper[4813]: I0217 09:15:40.312620 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q2wwb_7a3ec35f-1008-41ec-842d-7d381d01ef12/extract-content/0.log" Feb 17 09:15:40 crc kubenswrapper[4813]: I0217 09:15:40.546741 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q2wwb_7a3ec35f-1008-41ec-842d-7d381d01ef12/registry-server/0.log" Feb 17 09:15:54 crc kubenswrapper[4813]: I0217 09:15:54.931399 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-jzdcd_8c42bb3e-30f2-484f-98d6-cc3d6209897a/prometheus-operator/0.log" Feb 17 09:15:54 crc kubenswrapper[4813]: I0217 09:15:54.979683 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5667669b-55qdl_3614da2d-8f3c-41bd-a31c-9d7fdef31fad/prometheus-operator-admission-webhook/0.log" Feb 17 09:15:55 crc kubenswrapper[4813]: I0217 09:15:55.047561 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5667669b-gvchv_898154a7-5f53-4b78-bd75-4c62b2e6cae1/prometheus-operator-admission-webhook/0.log" Feb 17 09:15:55 crc kubenswrapper[4813]: I0217 09:15:55.171267 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-629tg_05000f8d-0078-48a4-a118-e184c008b5d4/operator/0.log" Feb 17 09:15:55 crc kubenswrapper[4813]: I0217 09:15:55.219407 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-gh8k5_123ad867-ebe9-4ea6-acfe-82f25010549e/observability-ui-dashboards/0.log" Feb 17 09:15:55 crc kubenswrapper[4813]: I0217 09:15:55.274969 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5xprj_c76875de-7ea3-431b-882a-e12415659320/perses-operator/0.log" Feb 17 09:16:03 crc kubenswrapper[4813]: I0217 09:16:03.061871 4813 scope.go:117] "RemoveContainer" containerID="e30970253b955ebe3d11d66b9b9e2fcd37d7ef0c04fd9985c926c338de6bc654" Feb 17 09:16:03 crc kubenswrapper[4813]: I0217 09:16:03.097890 4813 scope.go:117] "RemoveContainer" containerID="d65ae16037d07388d59fa655f4a3a12db5fa7368befd9f9601bb36e4f0ce6236" Feb 17 09:16:03 crc kubenswrapper[4813]: I0217 09:16:03.133508 4813 scope.go:117] "RemoveContainer" containerID="3285e6e0aff4a07554ffa6b76f4593014c0168a914dc06a48ccaa32dcb5317d8" 
Feb 17 09:16:03 crc kubenswrapper[4813]: I0217 09:16:03.200101 4813 scope.go:117] "RemoveContainer" containerID="af0a0c08b4b3368bbbf3ece7c942ef142d3c40ddd2b281bd1e7eb1418da1cd80" Feb 17 09:16:03 crc kubenswrapper[4813]: I0217 09:16:03.214684 4813 scope.go:117] "RemoveContainer" containerID="68568d543f23cc857e8587ee26f434a66a9e26c00cfbf3514d39704756d6b9b8" Feb 17 09:16:35 crc kubenswrapper[4813]: I0217 09:16:35.165782 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 09:16:35 crc kubenswrapper[4813]: I0217 09:16:35.166376 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 09:17:03 crc kubenswrapper[4813]: I0217 09:17:03.304555 4813 scope.go:117] "RemoveContainer" containerID="da8582cc36a23d8436bd804d1ea620cad0761d76d9d55fbecb5a7848ec6ec8ca" Feb 17 09:17:03 crc kubenswrapper[4813]: I0217 09:17:03.373371 4813 scope.go:117] "RemoveContainer" containerID="3cef7a096374e68894fcbfda9d1ad13de4be936e3fe232da5e40a4cfc32cf219" Feb 17 09:17:03 crc kubenswrapper[4813]: I0217 09:17:03.409456 4813 scope.go:117] "RemoveContainer" containerID="2e77aedca53d58f712e0dfc24784cd0e9d2a2c746518db068ebbf91e849174a8" Feb 17 09:17:03 crc kubenswrapper[4813]: I0217 09:17:03.449730 4813 scope.go:117] "RemoveContainer" containerID="c03871266e4a32a1009bdf365eca13d4791746c307a65c4a8aa14fc2e4d2d614" Feb 17 09:17:03 crc kubenswrapper[4813]: I0217 09:17:03.495704 4813 scope.go:117] "RemoveContainer" containerID="d0f53a3b7998a32fa9721e65baa08a7aa6e2198a10d40c14e8ffd5cc485a20a8" 
Feb 17 09:17:03 crc kubenswrapper[4813]: I0217 09:17:03.541476 4813 scope.go:117] "RemoveContainer" containerID="871acd3e2a2905b85697d728b6c800260b473d6e2c132ddd69180d24f03b974f" Feb 17 09:17:05 crc kubenswrapper[4813]: I0217 09:17:05.165206 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 09:17:05 crc kubenswrapper[4813]: I0217 09:17:05.165468 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 09:17:05 crc kubenswrapper[4813]: I0217 09:17:05.461004 4813 generic.go:334] "Generic (PLEG): container finished" podID="6a6ce433-9576-45df-8e22-e935702d27c1" containerID="7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d" exitCode=0 Feb 17 09:17:05 crc kubenswrapper[4813]: I0217 09:17:05.461053 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6lrld/must-gather-hw7jb" event={"ID":"6a6ce433-9576-45df-8e22-e935702d27c1","Type":"ContainerDied","Data":"7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d"} Feb 17 09:17:05 crc kubenswrapper[4813]: I0217 09:17:05.461926 4813 scope.go:117] "RemoveContainer" containerID="7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d" Feb 17 09:17:05 crc kubenswrapper[4813]: I0217 09:17:05.941021 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6lrld_must-gather-hw7jb_6a6ce433-9576-45df-8e22-e935702d27c1/gather/0.log" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.586161 4813 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/certified-operators-4c758"] Feb 17 09:17:07 crc kubenswrapper[4813]: E0217 09:17:07.587375 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a6de549-5c82-4bd1-a207-e1cbc8724493" containerName="collect-profiles" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.587436 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a6de549-5c82-4bd1-a207-e1cbc8724493" containerName="collect-profiles" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.587970 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a6de549-5c82-4bd1-a207-e1cbc8724493" containerName="collect-profiles" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.590416 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.607097 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4c758"] Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.687732 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2441bd1-a371-4119-847e-5e2e61dbf87e-catalog-content\") pod \"certified-operators-4c758\" (UID: \"b2441bd1-a371-4119-847e-5e2e61dbf87e\") " pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.687850 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2441bd1-a371-4119-847e-5e2e61dbf87e-utilities\") pod \"certified-operators-4c758\" (UID: \"b2441bd1-a371-4119-847e-5e2e61dbf87e\") " pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.687966 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk7f6\" (UniqueName: \"kubernetes.io/projected/b2441bd1-a371-4119-847e-5e2e61dbf87e-kube-api-access-zk7f6\") pod \"certified-operators-4c758\" (UID: \"b2441bd1-a371-4119-847e-5e2e61dbf87e\") " pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.789641 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2441bd1-a371-4119-847e-5e2e61dbf87e-catalog-content\") pod \"certified-operators-4c758\" (UID: \"b2441bd1-a371-4119-847e-5e2e61dbf87e\") " pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.789730 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2441bd1-a371-4119-847e-5e2e61dbf87e-utilities\") pod \"certified-operators-4c758\" (UID: \"b2441bd1-a371-4119-847e-5e2e61dbf87e\") " pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.789778 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk7f6\" (UniqueName: \"kubernetes.io/projected/b2441bd1-a371-4119-847e-5e2e61dbf87e-kube-api-access-zk7f6\") pod \"certified-operators-4c758\" (UID: \"b2441bd1-a371-4119-847e-5e2e61dbf87e\") " pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.790104 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2441bd1-a371-4119-847e-5e2e61dbf87e-catalog-content\") pod \"certified-operators-4c758\" (UID: \"b2441bd1-a371-4119-847e-5e2e61dbf87e\") " pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.790176 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2441bd1-a371-4119-847e-5e2e61dbf87e-utilities\") pod \"certified-operators-4c758\" (UID: \"b2441bd1-a371-4119-847e-5e2e61dbf87e\") " pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.810554 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk7f6\" (UniqueName: \"kubernetes.io/projected/b2441bd1-a371-4119-847e-5e2e61dbf87e-kube-api-access-zk7f6\") pod \"certified-operators-4c758\" (UID: \"b2441bd1-a371-4119-847e-5e2e61dbf87e\") " pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:07 crc kubenswrapper[4813]: I0217 09:17:07.909830 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:08 crc kubenswrapper[4813]: I0217 09:17:08.257404 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4c758"] Feb 17 09:17:08 crc kubenswrapper[4813]: I0217 09:17:08.487688 4813 generic.go:334] "Generic (PLEG): container finished" podID="b2441bd1-a371-4119-847e-5e2e61dbf87e" containerID="2639b3bebdd80c3affbd2312a826bc2fe6c3107b911e17366a200f70ab4afed8" exitCode=0 Feb 17 09:17:08 crc kubenswrapper[4813]: I0217 09:17:08.487732 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c758" event={"ID":"b2441bd1-a371-4119-847e-5e2e61dbf87e","Type":"ContainerDied","Data":"2639b3bebdd80c3affbd2312a826bc2fe6c3107b911e17366a200f70ab4afed8"} Feb 17 09:17:08 crc kubenswrapper[4813]: I0217 09:17:08.487912 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c758" event={"ID":"b2441bd1-a371-4119-847e-5e2e61dbf87e","Type":"ContainerStarted","Data":"ad85e416d307e23dcb6b509ebd2eb7d81bcabd29186677f0b7346161c2e9d8c8"} Feb 17 09:17:08 crc 
kubenswrapper[4813]: I0217 09:17:08.489170 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 09:17:09 crc kubenswrapper[4813]: I0217 09:17:09.499649 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c758" event={"ID":"b2441bd1-a371-4119-847e-5e2e61dbf87e","Type":"ContainerStarted","Data":"c7b61a9663e8a419860573136d70f60ff7ebe103c7aa13c14ad3365ff7e0928f"} Feb 17 09:17:10 crc kubenswrapper[4813]: I0217 09:17:10.516439 4813 generic.go:334] "Generic (PLEG): container finished" podID="b2441bd1-a371-4119-847e-5e2e61dbf87e" containerID="c7b61a9663e8a419860573136d70f60ff7ebe103c7aa13c14ad3365ff7e0928f" exitCode=0 Feb 17 09:17:10 crc kubenswrapper[4813]: I0217 09:17:10.516478 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c758" event={"ID":"b2441bd1-a371-4119-847e-5e2e61dbf87e","Type":"ContainerDied","Data":"c7b61a9663e8a419860573136d70f60ff7ebe103c7aa13c14ad3365ff7e0928f"} Feb 17 09:17:11 crc kubenswrapper[4813]: I0217 09:17:11.530964 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c758" event={"ID":"b2441bd1-a371-4119-847e-5e2e61dbf87e","Type":"ContainerStarted","Data":"b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e"} Feb 17 09:17:13 crc kubenswrapper[4813]: I0217 09:17:13.814455 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4c758" podStartSLOduration=4.339576228 podStartE2EDuration="6.814435655s" podCreationTimestamp="2026-02-17 09:17:07 +0000 UTC" firstStartedPulling="2026-02-17 09:17:08.488949962 +0000 UTC m=+2176.149711185" lastFinishedPulling="2026-02-17 09:17:10.963809389 +0000 UTC m=+2178.624570612" observedRunningTime="2026-02-17 09:17:11.561844914 +0000 UTC m=+2179.222606157" watchObservedRunningTime="2026-02-17 09:17:13.814435655 +0000 UTC 
m=+2181.475196898" Feb 17 09:17:13 crc kubenswrapper[4813]: I0217 09:17:13.822233 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6lrld/must-gather-hw7jb"] Feb 17 09:17:13 crc kubenswrapper[4813]: I0217 09:17:13.822565 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6lrld/must-gather-hw7jb" podUID="6a6ce433-9576-45df-8e22-e935702d27c1" containerName="copy" containerID="cri-o://a94e083d5e381b0418cfa53937ae64608e56a09bfa347ad92b56ed85b4fd6be4" gracePeriod=2 Feb 17 09:17:13 crc kubenswrapper[4813]: I0217 09:17:13.834988 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6lrld/must-gather-hw7jb"] Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.246254 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6lrld_must-gather-hw7jb_6a6ce433-9576-45df-8e22-e935702d27c1/copy/0.log" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.246836 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6lrld/must-gather-hw7jb" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.303474 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n47ch\" (UniqueName: \"kubernetes.io/projected/6a6ce433-9576-45df-8e22-e935702d27c1-kube-api-access-n47ch\") pod \"6a6ce433-9576-45df-8e22-e935702d27c1\" (UID: \"6a6ce433-9576-45df-8e22-e935702d27c1\") " Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.303620 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6a6ce433-9576-45df-8e22-e935702d27c1-must-gather-output\") pod \"6a6ce433-9576-45df-8e22-e935702d27c1\" (UID: \"6a6ce433-9576-45df-8e22-e935702d27c1\") " Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.310553 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a6ce433-9576-45df-8e22-e935702d27c1-kube-api-access-n47ch" (OuterVolumeSpecName: "kube-api-access-n47ch") pod "6a6ce433-9576-45df-8e22-e935702d27c1" (UID: "6a6ce433-9576-45df-8e22-e935702d27c1"). InnerVolumeSpecName "kube-api-access-n47ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.405368 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n47ch\" (UniqueName: \"kubernetes.io/projected/6a6ce433-9576-45df-8e22-e935702d27c1-kube-api-access-n47ch\") on node \"crc\" DevicePath \"\"" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.415920 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a6ce433-9576-45df-8e22-e935702d27c1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6a6ce433-9576-45df-8e22-e935702d27c1" (UID: "6a6ce433-9576-45df-8e22-e935702d27c1"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.506388 4813 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6a6ce433-9576-45df-8e22-e935702d27c1-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.553778 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6lrld_must-gather-hw7jb_6a6ce433-9576-45df-8e22-e935702d27c1/copy/0.log" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.554278 4813 generic.go:334] "Generic (PLEG): container finished" podID="6a6ce433-9576-45df-8e22-e935702d27c1" containerID="a94e083d5e381b0418cfa53937ae64608e56a09bfa347ad92b56ed85b4fd6be4" exitCode=143 Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.554377 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6lrld/must-gather-hw7jb" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.554417 4813 scope.go:117] "RemoveContainer" containerID="a94e083d5e381b0418cfa53937ae64608e56a09bfa347ad92b56ed85b4fd6be4" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.581673 4813 scope.go:117] "RemoveContainer" containerID="7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.659569 4813 scope.go:117] "RemoveContainer" containerID="a94e083d5e381b0418cfa53937ae64608e56a09bfa347ad92b56ed85b4fd6be4" Feb 17 09:17:14 crc kubenswrapper[4813]: E0217 09:17:14.663490 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a94e083d5e381b0418cfa53937ae64608e56a09bfa347ad92b56ed85b4fd6be4\": container with ID starting with a94e083d5e381b0418cfa53937ae64608e56a09bfa347ad92b56ed85b4fd6be4 not found: ID does not exist" 
containerID="a94e083d5e381b0418cfa53937ae64608e56a09bfa347ad92b56ed85b4fd6be4" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.663644 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a94e083d5e381b0418cfa53937ae64608e56a09bfa347ad92b56ed85b4fd6be4"} err="failed to get container status \"a94e083d5e381b0418cfa53937ae64608e56a09bfa347ad92b56ed85b4fd6be4\": rpc error: code = NotFound desc = could not find container \"a94e083d5e381b0418cfa53937ae64608e56a09bfa347ad92b56ed85b4fd6be4\": container with ID starting with a94e083d5e381b0418cfa53937ae64608e56a09bfa347ad92b56ed85b4fd6be4 not found: ID does not exist" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.663733 4813 scope.go:117] "RemoveContainer" containerID="7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d" Feb 17 09:17:14 crc kubenswrapper[4813]: E0217 09:17:14.664206 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d\": container with ID starting with 7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d not found: ID does not exist" containerID="7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d" Feb 17 09:17:14 crc kubenswrapper[4813]: I0217 09:17:14.664246 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d"} err="failed to get container status \"7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d\": rpc error: code = NotFound desc = could not find container \"7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d\": container with ID starting with 7820b8f3fd2deba9b30f7345fbcfa715af4f165ae7e90a0b13548725d4c5ac0d not found: ID does not exist" Feb 17 09:17:15 crc kubenswrapper[4813]: I0217 09:17:15.121824 4813 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a6ce433-9576-45df-8e22-e935702d27c1" path="/var/lib/kubelet/pods/6a6ce433-9576-45df-8e22-e935702d27c1/volumes" Feb 17 09:17:17 crc kubenswrapper[4813]: I0217 09:17:17.910273 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:17 crc kubenswrapper[4813]: I0217 09:17:17.910601 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:17 crc kubenswrapper[4813]: I0217 09:17:17.983943 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:18 crc kubenswrapper[4813]: I0217 09:17:18.645247 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:21 crc kubenswrapper[4813]: I0217 09:17:21.568192 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4c758"] Feb 17 09:17:21 crc kubenswrapper[4813]: I0217 09:17:21.568729 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4c758" podUID="b2441bd1-a371-4119-847e-5e2e61dbf87e" containerName="registry-server" containerID="cri-o://b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e" gracePeriod=2 Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.093130 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.220845 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk7f6\" (UniqueName: \"kubernetes.io/projected/b2441bd1-a371-4119-847e-5e2e61dbf87e-kube-api-access-zk7f6\") pod \"b2441bd1-a371-4119-847e-5e2e61dbf87e\" (UID: \"b2441bd1-a371-4119-847e-5e2e61dbf87e\") " Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.220972 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2441bd1-a371-4119-847e-5e2e61dbf87e-utilities\") pod \"b2441bd1-a371-4119-847e-5e2e61dbf87e\" (UID: \"b2441bd1-a371-4119-847e-5e2e61dbf87e\") " Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.221007 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2441bd1-a371-4119-847e-5e2e61dbf87e-catalog-content\") pod \"b2441bd1-a371-4119-847e-5e2e61dbf87e\" (UID: \"b2441bd1-a371-4119-847e-5e2e61dbf87e\") " Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.222952 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2441bd1-a371-4119-847e-5e2e61dbf87e-utilities" (OuterVolumeSpecName: "utilities") pod "b2441bd1-a371-4119-847e-5e2e61dbf87e" (UID: "b2441bd1-a371-4119-847e-5e2e61dbf87e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.229408 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2441bd1-a371-4119-847e-5e2e61dbf87e-kube-api-access-zk7f6" (OuterVolumeSpecName: "kube-api-access-zk7f6") pod "b2441bd1-a371-4119-847e-5e2e61dbf87e" (UID: "b2441bd1-a371-4119-847e-5e2e61dbf87e"). InnerVolumeSpecName "kube-api-access-zk7f6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.284117 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2441bd1-a371-4119-847e-5e2e61dbf87e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2441bd1-a371-4119-847e-5e2e61dbf87e" (UID: "b2441bd1-a371-4119-847e-5e2e61dbf87e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.324070 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2441bd1-a371-4119-847e-5e2e61dbf87e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.324127 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk7f6\" (UniqueName: \"kubernetes.io/projected/b2441bd1-a371-4119-847e-5e2e61dbf87e-kube-api-access-zk7f6\") on node \"crc\" DevicePath \"\"" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.324151 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2441bd1-a371-4119-847e-5e2e61dbf87e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.647574 4813 generic.go:334] "Generic (PLEG): container finished" podID="b2441bd1-a371-4119-847e-5e2e61dbf87e" containerID="b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e" exitCode=0 Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.648837 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c758" event={"ID":"b2441bd1-a371-4119-847e-5e2e61dbf87e","Type":"ContainerDied","Data":"b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e"} Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.648982 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-4c758" event={"ID":"b2441bd1-a371-4119-847e-5e2e61dbf87e","Type":"ContainerDied","Data":"ad85e416d307e23dcb6b509ebd2eb7d81bcabd29186677f0b7346161c2e9d8c8"} Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.649105 4813 scope.go:117] "RemoveContainer" containerID="b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.649450 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4c758" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.707729 4813 scope.go:117] "RemoveContainer" containerID="c7b61a9663e8a419860573136d70f60ff7ebe103c7aa13c14ad3365ff7e0928f" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.724763 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4c758"] Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.731473 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4c758"] Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.769288 4813 scope.go:117] "RemoveContainer" containerID="2639b3bebdd80c3affbd2312a826bc2fe6c3107b911e17366a200f70ab4afed8" Feb 17 09:17:22 crc kubenswrapper[4813]: E0217 09:17:22.814734 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2441bd1_a371_4119_847e_5e2e61dbf87e.slice/crio-ad85e416d307e23dcb6b509ebd2eb7d81bcabd29186677f0b7346161c2e9d8c8\": RecentStats: unable to find data in memory cache]" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.816638 4813 scope.go:117] "RemoveContainer" containerID="b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e" Feb 17 09:17:22 crc kubenswrapper[4813]: E0217 09:17:22.818214 4813 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e\": container with ID starting with b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e not found: ID does not exist" containerID="b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.818335 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e"} err="failed to get container status \"b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e\": rpc error: code = NotFound desc = could not find container \"b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e\": container with ID starting with b8e3b2bfa7514a5614c7a22b2241adc92435ef3ff7875b6802acf0a740fd142e not found: ID does not exist" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.818418 4813 scope.go:117] "RemoveContainer" containerID="c7b61a9663e8a419860573136d70f60ff7ebe103c7aa13c14ad3365ff7e0928f" Feb 17 09:17:22 crc kubenswrapper[4813]: E0217 09:17:22.818848 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7b61a9663e8a419860573136d70f60ff7ebe103c7aa13c14ad3365ff7e0928f\": container with ID starting with c7b61a9663e8a419860573136d70f60ff7ebe103c7aa13c14ad3365ff7e0928f not found: ID does not exist" containerID="c7b61a9663e8a419860573136d70f60ff7ebe103c7aa13c14ad3365ff7e0928f" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.818949 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7b61a9663e8a419860573136d70f60ff7ebe103c7aa13c14ad3365ff7e0928f"} err="failed to get container status \"c7b61a9663e8a419860573136d70f60ff7ebe103c7aa13c14ad3365ff7e0928f\": rpc error: code = NotFound desc = could not find container 
\"c7b61a9663e8a419860573136d70f60ff7ebe103c7aa13c14ad3365ff7e0928f\": container with ID starting with c7b61a9663e8a419860573136d70f60ff7ebe103c7aa13c14ad3365ff7e0928f not found: ID does not exist" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.819020 4813 scope.go:117] "RemoveContainer" containerID="2639b3bebdd80c3affbd2312a826bc2fe6c3107b911e17366a200f70ab4afed8" Feb 17 09:17:22 crc kubenswrapper[4813]: E0217 09:17:22.819401 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2639b3bebdd80c3affbd2312a826bc2fe6c3107b911e17366a200f70ab4afed8\": container with ID starting with 2639b3bebdd80c3affbd2312a826bc2fe6c3107b911e17366a200f70ab4afed8 not found: ID does not exist" containerID="2639b3bebdd80c3affbd2312a826bc2fe6c3107b911e17366a200f70ab4afed8" Feb 17 09:17:22 crc kubenswrapper[4813]: I0217 09:17:22.819473 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2639b3bebdd80c3affbd2312a826bc2fe6c3107b911e17366a200f70ab4afed8"} err="failed to get container status \"2639b3bebdd80c3affbd2312a826bc2fe6c3107b911e17366a200f70ab4afed8\": rpc error: code = NotFound desc = could not find container \"2639b3bebdd80c3affbd2312a826bc2fe6c3107b911e17366a200f70ab4afed8\": container with ID starting with 2639b3bebdd80c3affbd2312a826bc2fe6c3107b911e17366a200f70ab4afed8 not found: ID does not exist" Feb 17 09:17:23 crc kubenswrapper[4813]: I0217 09:17:23.120909 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2441bd1-a371-4119-847e-5e2e61dbf87e" path="/var/lib/kubelet/pods/b2441bd1-a371-4119-847e-5e2e61dbf87e/volumes" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.183205 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cwj"] Feb 17 09:17:29 crc kubenswrapper[4813]: E0217 09:17:29.184109 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b2441bd1-a371-4119-847e-5e2e61dbf87e" containerName="extract-content" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.184123 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2441bd1-a371-4119-847e-5e2e61dbf87e" containerName="extract-content" Feb 17 09:17:29 crc kubenswrapper[4813]: E0217 09:17:29.184146 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6ce433-9576-45df-8e22-e935702d27c1" containerName="copy" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.184156 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6ce433-9576-45df-8e22-e935702d27c1" containerName="copy" Feb 17 09:17:29 crc kubenswrapper[4813]: E0217 09:17:29.184169 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2441bd1-a371-4119-847e-5e2e61dbf87e" containerName="extract-utilities" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.184179 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2441bd1-a371-4119-847e-5e2e61dbf87e" containerName="extract-utilities" Feb 17 09:17:29 crc kubenswrapper[4813]: E0217 09:17:29.184204 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2441bd1-a371-4119-847e-5e2e61dbf87e" containerName="registry-server" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.184212 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2441bd1-a371-4119-847e-5e2e61dbf87e" containerName="registry-server" Feb 17 09:17:29 crc kubenswrapper[4813]: E0217 09:17:29.184227 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6ce433-9576-45df-8e22-e935702d27c1" containerName="gather" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.184235 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6ce433-9576-45df-8e22-e935702d27c1" containerName="gather" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.184442 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6ce433-9576-45df-8e22-e935702d27c1" 
containerName="copy" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.184458 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2441bd1-a371-4119-847e-5e2e61dbf87e" containerName="registry-server" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.184475 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6ce433-9576-45df-8e22-e935702d27c1" containerName="gather" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.185852 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.217792 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cwj"] Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.258696 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-utilities\") pod \"redhat-marketplace-m9cwj\" (UID: \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\") " pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.259009 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-catalog-content\") pod \"redhat-marketplace-m9cwj\" (UID: \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\") " pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.259057 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd7j9\" (UniqueName: \"kubernetes.io/projected/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-kube-api-access-fd7j9\") pod \"redhat-marketplace-m9cwj\" (UID: \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\") " 
pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.360955 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-utilities\") pod \"redhat-marketplace-m9cwj\" (UID: \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\") " pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.361072 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-catalog-content\") pod \"redhat-marketplace-m9cwj\" (UID: \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\") " pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.361117 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd7j9\" (UniqueName: \"kubernetes.io/projected/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-kube-api-access-fd7j9\") pod \"redhat-marketplace-m9cwj\" (UID: \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\") " pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.361586 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-catalog-content\") pod \"redhat-marketplace-m9cwj\" (UID: \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\") " pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.361946 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-utilities\") pod \"redhat-marketplace-m9cwj\" (UID: \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\") " pod="openshift-marketplace/redhat-marketplace-m9cwj" 
Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.393967 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd7j9\" (UniqueName: \"kubernetes.io/projected/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-kube-api-access-fd7j9\") pod \"redhat-marketplace-m9cwj\" (UID: \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\") " pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:29 crc kubenswrapper[4813]: I0217 09:17:29.517419 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:30 crc kubenswrapper[4813]: I0217 09:17:30.036196 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cwj"] Feb 17 09:17:30 crc kubenswrapper[4813]: I0217 09:17:30.720814 4813 generic.go:334] "Generic (PLEG): container finished" podID="ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" containerID="0a9313d177666763f2a10a91a3918cb5de52396dc97a1f686720c7d46ba1e765" exitCode=0 Feb 17 09:17:30 crc kubenswrapper[4813]: I0217 09:17:30.720867 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwj" event={"ID":"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f","Type":"ContainerDied","Data":"0a9313d177666763f2a10a91a3918cb5de52396dc97a1f686720c7d46ba1e765"} Feb 17 09:17:30 crc kubenswrapper[4813]: I0217 09:17:30.721137 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwj" event={"ID":"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f","Type":"ContainerStarted","Data":"e2cc75ca2c4c2a6f4cb64d73cec6f4d01826aba377f112ad8ad347997d3aac66"} Feb 17 09:17:31 crc kubenswrapper[4813]: I0217 09:17:31.730860 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwj" event={"ID":"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f","Type":"ContainerStarted","Data":"36c9143587a810e5c1fab0dfcfac2d9f94a5154d5498ff01abe92acd5c7c4e76"} Feb 17 
09:17:32 crc kubenswrapper[4813]: I0217 09:17:32.743349 4813 generic.go:334] "Generic (PLEG): container finished" podID="ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" containerID="36c9143587a810e5c1fab0dfcfac2d9f94a5154d5498ff01abe92acd5c7c4e76" exitCode=0 Feb 17 09:17:32 crc kubenswrapper[4813]: I0217 09:17:32.743394 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwj" event={"ID":"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f","Type":"ContainerDied","Data":"36c9143587a810e5c1fab0dfcfac2d9f94a5154d5498ff01abe92acd5c7c4e76"} Feb 17 09:17:33 crc kubenswrapper[4813]: I0217 09:17:33.752441 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwj" event={"ID":"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f","Type":"ContainerStarted","Data":"d8036888ad61e95434a62322ea8545eb942e18fcc47ea4716c21466689c9b3c3"} Feb 17 09:17:33 crc kubenswrapper[4813]: I0217 09:17:33.773195 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m9cwj" podStartSLOduration=2.379980311 podStartE2EDuration="4.773176894s" podCreationTimestamp="2026-02-17 09:17:29 +0000 UTC" firstStartedPulling="2026-02-17 09:17:30.722840993 +0000 UTC m=+2198.383602216" lastFinishedPulling="2026-02-17 09:17:33.116037576 +0000 UTC m=+2200.776798799" observedRunningTime="2026-02-17 09:17:33.768285545 +0000 UTC m=+2201.429046778" watchObservedRunningTime="2026-02-17 09:17:33.773176894 +0000 UTC m=+2201.433938127" Feb 17 09:17:35 crc kubenswrapper[4813]: I0217 09:17:35.165740 4813 patch_prober.go:28] interesting pod/machine-config-daemon-w2pz7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 09:17:35 crc kubenswrapper[4813]: I0217 09:17:35.166054 4813 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 09:17:35 crc kubenswrapper[4813]: I0217 09:17:35.166102 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" Feb 17 09:17:35 crc kubenswrapper[4813]: I0217 09:17:35.166727 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"} pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 09:17:35 crc kubenswrapper[4813]: I0217 09:17:35.166794 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" containerName="machine-config-daemon" containerID="cri-o://1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c" gracePeriod=600 Feb 17 09:17:35 crc kubenswrapper[4813]: E0217 09:17:35.295737 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:17:35 crc kubenswrapper[4813]: I0217 09:17:35.775667 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a6ba827-b08b-4163-b067-d9adb119398d" 
containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c" exitCode=0 Feb 17 09:17:35 crc kubenswrapper[4813]: I0217 09:17:35.775733 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" event={"ID":"3a6ba827-b08b-4163-b067-d9adb119398d","Type":"ContainerDied","Data":"1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"} Feb 17 09:17:35 crc kubenswrapper[4813]: I0217 09:17:35.775788 4813 scope.go:117] "RemoveContainer" containerID="547bf834189b6e0ded62303fe339620a5821abad97ab1b7e1056ad7ed88e802b" Feb 17 09:17:35 crc kubenswrapper[4813]: I0217 09:17:35.776634 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c" Feb 17 09:17:35 crc kubenswrapper[4813]: E0217 09:17:35.777057 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d" Feb 17 09:17:39 crc kubenswrapper[4813]: I0217 09:17:39.519230 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:39 crc kubenswrapper[4813]: I0217 09:17:39.519829 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:39 crc kubenswrapper[4813]: I0217 09:17:39.581850 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:39 crc kubenswrapper[4813]: I0217 09:17:39.879448 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:43 crc kubenswrapper[4813]: I0217 09:17:43.574060 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cwj"] Feb 17 09:17:43 crc kubenswrapper[4813]: I0217 09:17:43.575007 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m9cwj" podUID="ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" containerName="registry-server" containerID="cri-o://d8036888ad61e95434a62322ea8545eb942e18fcc47ea4716c21466689c9b3c3" gracePeriod=2 Feb 17 09:17:43 crc kubenswrapper[4813]: I0217 09:17:43.871947 4813 generic.go:334] "Generic (PLEG): container finished" podID="ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" containerID="d8036888ad61e95434a62322ea8545eb942e18fcc47ea4716c21466689c9b3c3" exitCode=0 Feb 17 09:17:43 crc kubenswrapper[4813]: I0217 09:17:43.872079 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwj" event={"ID":"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f","Type":"ContainerDied","Data":"d8036888ad61e95434a62322ea8545eb942e18fcc47ea4716c21466689c9b3c3"} Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.119817 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9cwj" Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.309586 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd7j9\" (UniqueName: \"kubernetes.io/projected/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-kube-api-access-fd7j9\") pod \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\" (UID: \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\") " Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.309739 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-catalog-content\") pod \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\" (UID: \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\") " Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.309797 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-utilities\") pod \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\" (UID: \"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f\") " Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.311070 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-utilities" (OuterVolumeSpecName: "utilities") pod "ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" (UID: "ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.330903 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-kube-api-access-fd7j9" (OuterVolumeSpecName: "kube-api-access-fd7j9") pod "ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" (UID: "ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f"). InnerVolumeSpecName "kube-api-access-fd7j9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.344422 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" (UID: "ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.412174 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd7j9\" (UniqueName: \"kubernetes.io/projected/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-kube-api-access-fd7j9\") on node \"crc\" DevicePath \"\"" Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.412213 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.412227 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.881844 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cwj" event={"ID":"ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f","Type":"ContainerDied","Data":"e2cc75ca2c4c2a6f4cb64d73cec6f4d01826aba377f112ad8ad347997d3aac66"} Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.881891 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9cwj"
Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.881907 4813 scope.go:117] "RemoveContainer" containerID="d8036888ad61e95434a62322ea8545eb942e18fcc47ea4716c21466689c9b3c3"
Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.901732 4813 scope.go:117] "RemoveContainer" containerID="36c9143587a810e5c1fab0dfcfac2d9f94a5154d5498ff01abe92acd5c7c4e76"
Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.924289 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cwj"]
Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.934433 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cwj"]
Feb 17 09:17:44 crc kubenswrapper[4813]: I0217 09:17:44.940009 4813 scope.go:117] "RemoveContainer" containerID="0a9313d177666763f2a10a91a3918cb5de52396dc97a1f686720c7d46ba1e765"
Feb 17 09:17:45 crc kubenswrapper[4813]: I0217 09:17:45.119623 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" path="/var/lib/kubelet/pods/ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f/volumes"
Feb 17 09:17:46 crc kubenswrapper[4813]: I0217 09:17:46.117577 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:17:46 crc kubenswrapper[4813]: E0217 09:17:46.118262 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:17:57 crc kubenswrapper[4813]: I0217 09:17:57.783950 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g8ptp"]
Feb 17 09:17:57 crc kubenswrapper[4813]: E0217 09:17:57.784961 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" containerName="extract-content"
Feb 17 09:17:57 crc kubenswrapper[4813]: I0217 09:17:57.784982 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" containerName="extract-content"
Feb 17 09:17:57 crc kubenswrapper[4813]: E0217 09:17:57.785007 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" containerName="registry-server"
Feb 17 09:17:57 crc kubenswrapper[4813]: I0217 09:17:57.785018 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" containerName="registry-server"
Feb 17 09:17:57 crc kubenswrapper[4813]: E0217 09:17:57.785045 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" containerName="extract-utilities"
Feb 17 09:17:57 crc kubenswrapper[4813]: I0217 09:17:57.785063 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" containerName="extract-utilities"
Feb 17 09:17:57 crc kubenswrapper[4813]: I0217 09:17:57.785352 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2b2c34-cd63-4ca4-89fc-e33c15a0f90f" containerName="registry-server"
Feb 17 09:17:57 crc kubenswrapper[4813]: I0217 09:17:57.786864 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:17:57 crc kubenswrapper[4813]: I0217 09:17:57.803461 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g8ptp"]
Feb 17 09:17:57 crc kubenswrapper[4813]: I0217 09:17:57.903843 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fa3a52-7959-4598-a590-8091bf1c4592-catalog-content\") pod \"community-operators-g8ptp\" (UID: \"15fa3a52-7959-4598-a590-8091bf1c4592\") " pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:17:57 crc kubenswrapper[4813]: I0217 09:17:57.903945 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fa3a52-7959-4598-a590-8091bf1c4592-utilities\") pod \"community-operators-g8ptp\" (UID: \"15fa3a52-7959-4598-a590-8091bf1c4592\") " pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:17:57 crc kubenswrapper[4813]: I0217 09:17:57.903972 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mxmn\" (UniqueName: \"kubernetes.io/projected/15fa3a52-7959-4598-a590-8091bf1c4592-kube-api-access-2mxmn\") pod \"community-operators-g8ptp\" (UID: \"15fa3a52-7959-4598-a590-8091bf1c4592\") " pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:17:58 crc kubenswrapper[4813]: I0217 09:17:58.004878 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fa3a52-7959-4598-a590-8091bf1c4592-catalog-content\") pod \"community-operators-g8ptp\" (UID: \"15fa3a52-7959-4598-a590-8091bf1c4592\") " pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:17:58 crc kubenswrapper[4813]: I0217 09:17:58.004963 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fa3a52-7959-4598-a590-8091bf1c4592-utilities\") pod \"community-operators-g8ptp\" (UID: \"15fa3a52-7959-4598-a590-8091bf1c4592\") " pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:17:58 crc kubenswrapper[4813]: I0217 09:17:58.004983 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mxmn\" (UniqueName: \"kubernetes.io/projected/15fa3a52-7959-4598-a590-8091bf1c4592-kube-api-access-2mxmn\") pod \"community-operators-g8ptp\" (UID: \"15fa3a52-7959-4598-a590-8091bf1c4592\") " pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:17:58 crc kubenswrapper[4813]: I0217 09:17:58.005391 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fa3a52-7959-4598-a590-8091bf1c4592-utilities\") pod \"community-operators-g8ptp\" (UID: \"15fa3a52-7959-4598-a590-8091bf1c4592\") " pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:17:58 crc kubenswrapper[4813]: I0217 09:17:58.005600 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fa3a52-7959-4598-a590-8091bf1c4592-catalog-content\") pod \"community-operators-g8ptp\" (UID: \"15fa3a52-7959-4598-a590-8091bf1c4592\") " pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:17:58 crc kubenswrapper[4813]: I0217 09:17:58.027717 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mxmn\" (UniqueName: \"kubernetes.io/projected/15fa3a52-7959-4598-a590-8091bf1c4592-kube-api-access-2mxmn\") pod \"community-operators-g8ptp\" (UID: \"15fa3a52-7959-4598-a590-8091bf1c4592\") " pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:17:58 crc kubenswrapper[4813]: I0217 09:17:58.109367 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:17:58 crc kubenswrapper[4813]: I0217 09:17:58.607970 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g8ptp"]
Feb 17 09:17:58 crc kubenswrapper[4813]: W0217 09:17:58.618484 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15fa3a52_7959_4598_a590_8091bf1c4592.slice/crio-e5b36a1d308f19fda82caddb100b1d798a9d970aab8a23705172db72fe0ea8b0 WatchSource:0}: Error finding container e5b36a1d308f19fda82caddb100b1d798a9d970aab8a23705172db72fe0ea8b0: Status 404 returned error can't find the container with id e5b36a1d308f19fda82caddb100b1d798a9d970aab8a23705172db72fe0ea8b0
Feb 17 09:17:59 crc kubenswrapper[4813]: I0217 09:17:59.024610 4813 generic.go:334] "Generic (PLEG): container finished" podID="15fa3a52-7959-4598-a590-8091bf1c4592" containerID="782a49b98bbdb34651090782ec5fc3a65299cdfc909415dbc8dce29e6967095c" exitCode=0
Feb 17 09:17:59 crc kubenswrapper[4813]: I0217 09:17:59.024645 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8ptp" event={"ID":"15fa3a52-7959-4598-a590-8091bf1c4592","Type":"ContainerDied","Data":"782a49b98bbdb34651090782ec5fc3a65299cdfc909415dbc8dce29e6967095c"}
Feb 17 09:17:59 crc kubenswrapper[4813]: I0217 09:17:59.024667 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8ptp" event={"ID":"15fa3a52-7959-4598-a590-8091bf1c4592","Type":"ContainerStarted","Data":"e5b36a1d308f19fda82caddb100b1d798a9d970aab8a23705172db72fe0ea8b0"}
Feb 17 09:18:00 crc kubenswrapper[4813]: I0217 09:18:00.034686 4813 generic.go:334] "Generic (PLEG): container finished" podID="15fa3a52-7959-4598-a590-8091bf1c4592" containerID="c0cb9824f5b95833da62c19714628a3a45b26ecd95dcc41597a1607163e6d34a" exitCode=0
Feb 17 09:18:00 crc kubenswrapper[4813]: I0217 09:18:00.034957 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8ptp" event={"ID":"15fa3a52-7959-4598-a590-8091bf1c4592","Type":"ContainerDied","Data":"c0cb9824f5b95833da62c19714628a3a45b26ecd95dcc41597a1607163e6d34a"}
Feb 17 09:18:00 crc kubenswrapper[4813]: I0217 09:18:00.110920 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:18:00 crc kubenswrapper[4813]: E0217 09:18:00.111202 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:18:01 crc kubenswrapper[4813]: I0217 09:18:01.045751 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8ptp" event={"ID":"15fa3a52-7959-4598-a590-8091bf1c4592","Type":"ContainerStarted","Data":"cbd8fb19f30bd3fc03de54b76e6876c11ee2ae233896f7719a539f67e3105120"}
Feb 17 09:18:01 crc kubenswrapper[4813]: I0217 09:18:01.067258 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g8ptp" podStartSLOduration=2.652102389 podStartE2EDuration="4.067233118s" podCreationTimestamp="2026-02-17 09:17:57 +0000 UTC" firstStartedPulling="2026-02-17 09:17:59.026975461 +0000 UTC m=+2226.687736724" lastFinishedPulling="2026-02-17 09:18:00.44210622 +0000 UTC m=+2228.102867453" observedRunningTime="2026-02-17 09:18:01.061642709 +0000 UTC m=+2228.722403952" watchObservedRunningTime="2026-02-17 09:18:01.067233118 +0000 UTC m=+2228.727994351"
Feb 17 09:18:03 crc kubenswrapper[4813]: I0217 09:18:03.637122 4813 scope.go:117] "RemoveContainer" containerID="9424e398cdf54a3927e89fd28e89ac0a89df5e476bcc8675d3326c4502ac6fd9"
Feb 17 09:18:08 crc kubenswrapper[4813]: I0217 09:18:08.109600 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:18:08 crc kubenswrapper[4813]: I0217 09:18:08.110305 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:18:08 crc kubenswrapper[4813]: I0217 09:18:08.183348 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:18:09 crc kubenswrapper[4813]: I0217 09:18:09.201970 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:18:11 crc kubenswrapper[4813]: I0217 09:18:11.773373 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g8ptp"]
Feb 17 09:18:11 crc kubenswrapper[4813]: I0217 09:18:11.774034 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g8ptp" podUID="15fa3a52-7959-4598-a590-8091bf1c4592" containerName="registry-server" containerID="cri-o://cbd8fb19f30bd3fc03de54b76e6876c11ee2ae233896f7719a539f67e3105120" gracePeriod=2
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.169279 4813 generic.go:334] "Generic (PLEG): container finished" podID="15fa3a52-7959-4598-a590-8091bf1c4592" containerID="cbd8fb19f30bd3fc03de54b76e6876c11ee2ae233896f7719a539f67e3105120" exitCode=0
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.169397 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8ptp" event={"ID":"15fa3a52-7959-4598-a590-8091bf1c4592","Type":"ContainerDied","Data":"cbd8fb19f30bd3fc03de54b76e6876c11ee2ae233896f7719a539f67e3105120"}
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.169446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g8ptp" event={"ID":"15fa3a52-7959-4598-a590-8091bf1c4592","Type":"ContainerDied","Data":"e5b36a1d308f19fda82caddb100b1d798a9d970aab8a23705172db72fe0ea8b0"}
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.169474 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5b36a1d308f19fda82caddb100b1d798a9d970aab8a23705172db72fe0ea8b0"
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.207782 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.354273 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fa3a52-7959-4598-a590-8091bf1c4592-catalog-content\") pod \"15fa3a52-7959-4598-a590-8091bf1c4592\" (UID: \"15fa3a52-7959-4598-a590-8091bf1c4592\") "
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.354462 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fa3a52-7959-4598-a590-8091bf1c4592-utilities\") pod \"15fa3a52-7959-4598-a590-8091bf1c4592\" (UID: \"15fa3a52-7959-4598-a590-8091bf1c4592\") "
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.354504 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mxmn\" (UniqueName: \"kubernetes.io/projected/15fa3a52-7959-4598-a590-8091bf1c4592-kube-api-access-2mxmn\") pod \"15fa3a52-7959-4598-a590-8091bf1c4592\" (UID: \"15fa3a52-7959-4598-a590-8091bf1c4592\") "
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.356691 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15fa3a52-7959-4598-a590-8091bf1c4592-utilities" (OuterVolumeSpecName: "utilities") pod "15fa3a52-7959-4598-a590-8091bf1c4592" (UID: "15fa3a52-7959-4598-a590-8091bf1c4592"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.360498 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15fa3a52-7959-4598-a590-8091bf1c4592-kube-api-access-2mxmn" (OuterVolumeSpecName: "kube-api-access-2mxmn") pod "15fa3a52-7959-4598-a590-8091bf1c4592" (UID: "15fa3a52-7959-4598-a590-8091bf1c4592"). InnerVolumeSpecName "kube-api-access-2mxmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.411306 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15fa3a52-7959-4598-a590-8091bf1c4592-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15fa3a52-7959-4598-a590-8091bf1c4592" (UID: "15fa3a52-7959-4598-a590-8091bf1c4592"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.456275 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fa3a52-7959-4598-a590-8091bf1c4592-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.456575 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fa3a52-7959-4598-a590-8091bf1c4592-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 09:18:12 crc kubenswrapper[4813]: I0217 09:18:12.456704 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mxmn\" (UniqueName: \"kubernetes.io/projected/15fa3a52-7959-4598-a590-8091bf1c4592-kube-api-access-2mxmn\") on node \"crc\" DevicePath \"\""
Feb 17 09:18:13 crc kubenswrapper[4813]: I0217 09:18:13.178156 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g8ptp"
Feb 17 09:18:13 crc kubenswrapper[4813]: I0217 09:18:13.206491 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g8ptp"]
Feb 17 09:18:13 crc kubenswrapper[4813]: I0217 09:18:13.218199 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g8ptp"]
Feb 17 09:18:15 crc kubenswrapper[4813]: I0217 09:18:15.110772 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:18:15 crc kubenswrapper[4813]: E0217 09:18:15.111050 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:18:15 crc kubenswrapper[4813]: I0217 09:18:15.122794 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15fa3a52-7959-4598-a590-8091bf1c4592" path="/var/lib/kubelet/pods/15fa3a52-7959-4598-a590-8091bf1c4592/volumes"
Feb 17 09:18:26 crc kubenswrapper[4813]: I0217 09:18:26.111345 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:18:26 crc kubenswrapper[4813]: E0217 09:18:26.112278 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:18:40 crc kubenswrapper[4813]: I0217 09:18:40.111387 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:18:40 crc kubenswrapper[4813]: E0217 09:18:40.112402 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:18:51 crc kubenswrapper[4813]: I0217 09:18:51.112041 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:18:51 crc kubenswrapper[4813]: E0217 09:18:51.113442 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:19:05 crc kubenswrapper[4813]: I0217 09:19:05.111556 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:19:05 crc kubenswrapper[4813]: E0217 09:19:05.112333 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:19:19 crc kubenswrapper[4813]: I0217 09:19:19.111526 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:19:19 crc kubenswrapper[4813]: E0217 09:19:19.112559 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:19:34 crc kubenswrapper[4813]: I0217 09:19:34.111819 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:19:34 crc kubenswrapper[4813]: E0217 09:19:34.112630 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:19:47 crc kubenswrapper[4813]: I0217 09:19:47.111803 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:19:47 crc kubenswrapper[4813]: E0217 09:19:47.112686 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:20:02 crc kubenswrapper[4813]: I0217 09:20:02.111067 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:20:02 crc kubenswrapper[4813]: E0217 09:20:02.111830 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:20:15 crc kubenswrapper[4813]: I0217 09:20:15.111900 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:20:15 crc kubenswrapper[4813]: E0217 09:20:15.112918 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:20:26 crc kubenswrapper[4813]: I0217 09:20:26.111888 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:20:26 crc kubenswrapper[4813]: E0217 09:20:26.112898 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:20:41 crc kubenswrapper[4813]: I0217 09:20:41.111165 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:20:41 crc kubenswrapper[4813]: E0217 09:20:41.111875 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:20:53 crc kubenswrapper[4813]: I0217 09:20:53.128909 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:20:53 crc kubenswrapper[4813]: E0217 09:20:53.129577 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:21:08 crc kubenswrapper[4813]: I0217 09:21:08.112024 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:21:08 crc kubenswrapper[4813]: E0217 09:21:08.112692 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:21:23 crc kubenswrapper[4813]: I0217 09:21:23.117217 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:21:23 crc kubenswrapper[4813]: E0217 09:21:23.117931 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:21:37 crc kubenswrapper[4813]: I0217 09:21:37.116486 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:21:37 crc kubenswrapper[4813]: E0217 09:21:37.117051 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:21:52 crc kubenswrapper[4813]: I0217 09:21:52.111278 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:21:52 crc kubenswrapper[4813]: E0217 09:21:52.112268 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:22:03 crc kubenswrapper[4813]: I0217 09:22:03.119931 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:22:03 crc kubenswrapper[4813]: E0217 09:22:03.123004 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"
Feb 17 09:22:18 crc kubenswrapper[4813]: I0217 09:22:18.111135 4813 scope.go:117] "RemoveContainer" containerID="1334b1367c55329f6760e1de4b8e57798320616a09a80d13d44ccb129ddb051c"
Feb 17 09:22:18 crc kubenswrapper[4813]: E0217 09:22:18.111886 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-w2pz7_openshift-machine-config-operator(3a6ba827-b08b-4163-b067-d9adb119398d)\"" pod="openshift-machine-config-operator/machine-config-daemon-w2pz7" podUID="3a6ba827-b08b-4163-b067-d9adb119398d"